Python API Release Notes

1.1.1

Python SDK


Version 1.1.1 of the relationalai Python package is now available!

To upgrade, activate your virtual environment and run the following command:

pip install --upgrade relationalai

New Features and Enhancements

  • rai jobs:list and rai reasoners:list now make pagination explicit when you use --limit and --offset. Before, paginated table output could look like a complete result set even when it only showed one slice. Now the CLI adds a footer when needed, including a copyable next-page command when more results may be available and a clearer message when an offset is past the end of the result set.
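    The footer logic described above can be sketched as follows. This is a hypothetical illustration, not the CLI's actual code; the `pagination_footer` helper and its messages are invented for this example:

    ```python
    def pagination_footer(command: str, offset: int, limit: int, returned: int) -> str | None:
        # Hypothetical sketch: zero results at a nonzero offset means the
        # offset is past the end, while a full page suggests more results.
        if returned == 0 and offset > 0:
            return f"No results at offset {offset}; it may be past the end of the result set."
        if returned == limit:
            return f"More results may be available. Next page: {command} --limit {limit} --offset {offset + limit}"
        return None

    print(pagination_footer("rai jobs:list", 0, 20, 20))
    ```

    A page that comes back with fewer rows than --limit produces no footer, since the result set is complete.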

  • rai now checks whether a newer PyRel release is available and shows an upgrade prompt when your installed version is behind. Standard CLI invocations refresh a cached check in the background, and rai --version checks synchronously so the notice can appear immediately. The notice is suppressed for JSON output, CI, non-interactive runs, and when the RAI_NO_UPDATE_CHECK=1 environment variable is set.
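    The suppression rules amount to a check like the following. This is an illustrative sketch only; `should_show_update_notice` is not a real SDK function:

    ```python
    import os
    import sys

    def should_show_update_notice(json_output: bool) -> bool:
        # The notice is skipped for JSON output, CI, non-interactive runs,
        # and when RAI_NO_UPDATE_CHECK=1 is set in the environment.
        if os.environ.get("RAI_NO_UPDATE_CHECK") == "1":
            return False
        if json_output or os.environ.get("CI"):
            return False
        return sys.stdout.isatty()
    ```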

1.1.0

Python SDK


Version 1.1.0 of the relationalai Python package is now available!

To upgrade, activate your virtual environment and run the following command:

pip install --upgrade relationalai

Bug Fixes

  • Fixed jobs filtering in methods such as JobsClient.list() and commands such as rai jobs:list. Before, if you ran them with the reasoner type set to Predictive or Prescriptive, the output could leave out matching jobs when PyRel was configured for Direct Access. Now those jobs are included, as expected.

1.0.19

Python SDK


Version 1.0.19 of the relationalai Python package is now available!

To upgrade, activate your virtual environment and run the following command:

pip install --upgrade relationalai

New Features and Enhancements

  • Graph.is_acyclic() is now much faster on many large graphs, especially large trees and grids. In benchmarks for the change, the runtime for the largest tested grid graph dropped from about 751 seconds to 47 seconds.

Bug Fixes

  • Removed unsupported std.re APIs. These APIs raised errors whenever you tried to use them. All documentation pertaining to std.re has also been removed.

  • Fixed active profile selection so that the RAI_ACTIVE_PROFILE environment variable takes precedence over active_profile in raiconfig.yaml.
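    The precedence rule amounts to the following. This is an illustrative helper, not the SDK's implementation:

    ```python
    import os

    def resolve_active_profile(config: dict) -> str | None:
        # The RAI_ACTIVE_PROFILE environment variable takes precedence over
        # the active_profile key loaded from raiconfig.yaml.
        return os.environ.get("RAI_ACTIVE_PROFILE") or config.get("active_profile")

    os.environ["RAI_ACTIVE_PROFILE"] = "staging"
    print(resolve_active_profile({"active_profile": "default"}))  # staging
    ```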

  • Fixed prescriptive models that combined filtered aggregates in one expression. Before, PyRel could leak one aggregate's filter into another, so objectives and constraints could be silently dropped instead of being sent to the solver. For example:

    from relationalai.semantics import Float, Integer, Model
    from relationalai.semantics.reasoners.prescriptive import Problem
    
    model = Model("ScopedAggregates")
    X = model.Concept("X", identify_by={"i": Integer})
    X.v = model.Property(f"{X} has {Float:v}")
    
    model.define(X.new(i=1), X.new(i=2))
    
    problem = Problem(model, Float)
    problem.solve_for(X.v, name=["v", X.i], lower=0, upper=10)
    
    v = Float.ref()
    problem.satisfy(model.require(sum(v).where(X.v(v), X.i == 1) == 1.0))
    problem.satisfy(model.require(sum(v).where(X.v(v), X.i == 2) == 4.0))
    
    # The bug was triggered by combining two filtered aggregates in one expression.
    problem.minimize(sum(X.v).where(X.i == 1) + sum(X.v).where(X.i == 2))
    problem.satisfy(model.require(sum(X.v).where(X.i == 1) <= sum(X.v).where(X.i == 2)))
    
    problem.solve("highs")
    
    print(model.select(problem.num_min_objectives().alias("objective_count")).to_df())
    print(model.select(problem.num_constraints().alias("constraint_count")).to_df())
    print("objective_value:", problem.solve_info().objective_value)
    

    Output before the fix:

      objective_count
    0                0
      constraint_count
    0                 2
    objective_value: 0.0
    

    Output after the fix:

      objective_count
    0                1
      constraint_count
    0                 3
    objective_value: 5.0
    
  • Fixed model.data() calls that occur on the same source line, such as inside a loop. model.data() creates a temporary data source each time you call it. Before, if two calls happened on the same line, PyRel could mistake them for the same source, which could make later queries fail even when the input data was valid.

    For example:

    import pandas as pd

    from relationalai.semantics import Integer, Model, String
    
    m = Model("DataLoop")
    Item = m.Concept("Item", identify_by={"id": Integer, "name": String})
    
    batches = [
        pd.DataFrame([(1, "a"), (2, "b")], columns=["id", "name"]),
        pd.DataFrame([(3, "c"), (4, "d")], columns=["id", "name"]),
    ]
    
    for batch in batches:
        m.define(Item.new(m.data(batch).to_schema()))
    
    print(m.select(Item.id, Item.name).to_df())
    

    Output before the fix:

    RelQueryError: Query error
    

    Output after the fix:

      id name
    0  1   a
    1  2   b
    2  3   c
    3  4   d
    

    Now PyRel gives each same-line data source its own deterministic sequence number, so queries compile and run correctly.
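    That per-call-site numbering can be sketched like this. The `source_id` helper and its identifier format are invented for illustration and are not the SDK's actual internals:

    ```python
    import itertools
    from collections import defaultdict

    _counters: defaultdict = defaultdict(itertools.count)

    def source_id(filename: str, lineno: int) -> str:
        # Each (file, line) call site gets its own sequence counter, so two
        # data sources created on the same line stay distinct and deterministic.
        seq = next(_counters[(filename, lineno)])
        return f"{filename}:{lineno}#{seq}"

    print(source_id("model.py", 12))  # model.py:12#0
    print(source_id("model.py", 12))  # model.py:12#1
    ```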

1.0.18

Python SDK


Version 1.0.18 of the relationalai Python package is now available!

To upgrade, activate your virtual environment and run the following command:

pip install --upgrade relationalai

New Features and Enhancements

  • You can now pass an existing Snowpark session to create_config() with snowflake_session=.

    For example:

    from snowflake.snowpark import Session
    from relationalai.config import create_config
    
    snowflake_connection_parameters = {...}
    snowpark_session = Session.builder.configs(snowflake_connection_parameters).create()
    
    config = create_config(snowflake_session=snowpark_session)
    assert config.get_session() is snowpark_session
    

    Note: If you already load config from raiconfig.yaml or another supported config source, queries use the injected session instead of the Snowflake connection from that config.

    This change makes it easier to use PyRel in environments where you already have a Snowpark session.

Bug Fixes

  • Fixed nested lookups that could drop rows when the inner lookup had no match. Before, PyRel could skip rows instead of keeping the row and showing NULL for the missing value. For example:

    from relationalai.semantics import Integer, Model, String
    
    m = Model("CustomerModel")
    
    Customer = m.Concept("Customer", identify_by={"id": Integer})
    Customer.name = m.Property(f"{Customer} has name {String:name}")
    ReferralCode = m.Relationship(f"{Integer:customer_id} maps to {String:referral_code}")
    
    m.define(
        Customer.new(id=1, name="Alice"),
        Customer.new(id=2, name="Bob"),
        ReferralCode(1, "ABC123")
    )
    
    customer_id = Integer.ref()
    code = String.ref()
    
    q = (
      m.where(
        Customer.id == customer_id,
        referral_code := m.where(ReferralCode(customer_id, code)).select(code),
      )
      .select(Customer.name, referral_code.alias("referral_code"))
    )
    
    print(q.to_df())
    

    Output before the fix:

      name referral_code
    0 Alice ABC123
    

    Output after the fix:

      name referral_code
    0 Alice ABC123
    1 Bob   NULL
    
  • Fixed datetime.date.range() and datetime.datetime.range() when the end value comes from a DSL value such as a Concept, Property, or Relationship instead of a regular Python date or datetime. Before, the query could return one combined date range instead of a separate date range for each entity. For example:

    import datetime as dt

    from relationalai.semantics import Date, Integer, Model, define, select, std
    
    m = Model("dates")
    
    Foo = m.Concept("Foo", identify_by={"id": Integer})
    Foo.end_date = m.Property(f"{Foo} has {Date:end_date}")
    
    define(
        Foo.new(id=1, end_date=dt.date(2020, 1, 2)),
        Foo.new(id=2, end_date=dt.date(2020, 1, 4)),
    )
    
    select(
        std.datetime.date.range(dt.date(2020, 1, 1), Foo.end_date, freq="D")
    ).to_df()
    

    Output before the fix:

      result
    0 2020-01-01
    1 2020-01-02
    2 2020-01-03
    3 2020-01-04
    

    Output after the fix, with duplicate dates because both Foo rows contribute to the result:

      result
    0 2020-01-01
    1 2020-01-01
    2 2020-01-02
    3 2020-01-02
    4 2020-01-03
    5 2020-01-04
    
  • Fixed inferred field naming for numeric aliases such as Integer. Before, if you let PyRel infer a field name from one of those types, it generated a name like number_38_0 instead of integer, so string lookups such as Scores["integer"] would fail with a KeyError. For example:

    from relationalai.semantics import Integer, Model, String
    
    m = Model("scores")
    # The Integer field is unnamed here, so PyRel infers its field name.
    Scores = m.Relationship(f"{String:name} has {Integer}")
    
    # Prior to the fix, this lookup would raise a KeyError.
    m.select(Scores["integer"]).to_df()
    

    Now PyRel infers the correct field name, so Scores["integer"] resolves to the Integer field.

1.0.17

Python SDK


Version 1.0.17 of the relationalai Python package is now available!

To upgrade, activate your virtual environment and run the following command:

pip install --upgrade relationalai

New Features and Enhancements

  • PyRel now supports math.round(). You can round to the nearest integer or to a specified number of decimal places inside semantics expressions.

  • rai debugger now makes compiler output easier to inspect. You can choose which compiler steps to show in one dialog, see clearer color coding, and view job IDs directly in the UI. Before, you had to toggle those steps one by one and query cards did not show the job ID.

  • You can no longer create a property or relationship named Relationship, Property, Concept, or Table on a concept. If you try to use one of these reserved names, PyRel raises an error such as:

    'Relationship' is a reserved name and cannot be used as a relationship name on concept 'Person'.
    

    For example:

    from relationalai.semantics import Model, String
    
    m = Model()
    Person = m.Concept("Person")
    
    # Allowed
    Person.relationship = m.Property(f"{Person} has {String:relationship}")
    
    # Raises the reserved-name error
    Person.Relationship = m.Property(f"{Person} has {String:relationship}")
    

Bug Fixes

  • Fixed a slowdown when you ran the first query on a second model in the same Python process. Before, PyRel could check data sources from all models instead of only the model you were querying. Now each model keeps its own data sources, so queries check only the relevant ones.

  • Fixed exports to Snowflake tables when you declare the destination schema with Model.Table(). Before, if the destination schema had more columns than the current export produced, PyRel could leave out the extra declared columns. Now it keeps those columns and fills missing values with NULL, so later writes to the same table can still succeed.

    For example:

    from relationalai.semantics import Integer, Model, String
    
    m = Model("MyModel")
    Order = m.Concept("Order", identify_by={"id": Integer})
    Order.status = m.Property(f"{Order} has {String:status}")
    
    # This query exports only order_id and status.
    q = m.select(Order.id.alias("order_id"), Order.status)
    
    # The destination schema also declares shipped_at.
    out = m.Table(
        "ANALYTICS.PUBLIC.ORDER_EXPORT",
        schema={
            "order_id": Integer,
            "status": String,
            "shipped_at": String,
        },
    )
    
    q.into(out).exec()
    # In 1.0.17, shipped_at stays in the exported Snowflake table and is filled with NULL.
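    The column-filling behavior is equivalent to aligning each exported row against the declared schema. This is a plain-Python sketch of the idea, not the export code itself:

    ```python
    def align_row(row: dict, declared_columns: list) -> dict:
        # Keep every declared column; columns the export did not produce
        # are retained and filled with None (NULL in Snowflake).
        return {col: row.get(col) for col in declared_columns}

    print(align_row({"order_id": 1, "status": "shipped"},
                    ["order_id", "status", "shipped_at"]))
    # {'order_id': 1, 'status': 'shipped', 'shipped_at': None}
    ```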
    
  • Fixed fully qualified Snowflake names that use quoted lowercase or mixed-case object names. Before, PyRel could strip the quotes from names such as ANALYTICS.PUBLIC."orders" or ANALYTICS.PUBLIC."SalesSummary". That could make Snowflake read them as ANALYTICS.PUBLIC.ORDERS or ANALYTICS.PUBLIC.SALESSUMMARY, which might be different objects or might not even exist. Now PyRel preserves those quotes.
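    The quoting rule can be illustrated with a small splitter that keeps double-quoted segments intact. This is an illustrative sketch; `split_fqn` is not part of the SDK:

    ```python
    import re

    def split_fqn(name: str) -> list:
        # Split a fully qualified Snowflake name on dots, but treat a
        # double-quoted segment as a single part and preserve its quotes,
        # so case-sensitive names like "orders" are not folded to ORDERS.
        return re.findall(r'"[^"]*"|[^.]+', name)

    print(split_fqn('ANALYTICS.PUBLIC."orders"'))
    # ['ANALYTICS', 'PUBLIC', '"orders"']
    ```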