--- /srv/reproducible-results/rbuild-debian/r-b-build.9RZY0lOO/b1/sqlalchemy_2.0.32+ds1-1_i386.changes +++ /srv/reproducible-results/rbuild-debian/r-b-build.9RZY0lOO/b2/sqlalchemy_2.0.32+ds1-1_i386.changes ├── Files │ @@ -1,5 +1,5 @@ │ │ - 98a02c5c49bd80e90e3e1d1c45d8c8b2 3956068 doc optional python-sqlalchemy-doc_2.0.32+ds1-1_all.deb │ + a9d07aad0784f5e68858537e33b02d63 3956116 doc optional python-sqlalchemy-doc_2.0.32+ds1-1_all.deb │ 6be7382861298efb5a82b60405fa7f83 1748792 debug optional python3-sqlalchemy-ext-dbgsym_2.0.32+ds1-1_i386.deb │ b5c87ce5c170577275eeeed1855128e0 219664 python optional python3-sqlalchemy-ext_2.0.32+ds1-1_i386.deb │ e1c78ec120d9d481e2a5c4c579530013 1196072 python optional python3-sqlalchemy_2.0.32+ds1-1_all.deb ├── python-sqlalchemy-doc_2.0.32+ds1-1_all.deb │ ├── file list │ │ @@ -1,3 +1,3 @@ │ │ -rw-r--r-- 0 0 0 4 2024-08-23 07:52:58.000000 debian-binary │ │ --rw-r--r-- 0 0 0 13920 2024-08-23 07:52:58.000000 control.tar.xz │ │ --rw-r--r-- 0 0 0 3941956 2024-08-23 07:52:58.000000 data.tar.xz │ │ +-rw-r--r-- 0 0 0 13908 2024-08-23 07:52:58.000000 control.tar.xz │ │ +-rw-r--r-- 0 0 0 3942016 2024-08-23 07:52:58.000000 data.tar.xz │ ├── control.tar.xz │ │ ├── control.tar │ │ │ ├── ./md5sums │ │ │ │ ├── ./md5sums │ │ │ │ │┄ Files differ │ ├── data.tar.xz │ │ ├── data.tar │ │ │ ├── ./usr/share/doc/python-sqlalchemy-doc/html/changelog/changelog_14.html │ │ │ │ @@ -9239,15 +9239,22 @@ │ │ │ │
See also
│ │ │ │RowProxy is no longer a “proxy”; is now called Row and behaves like an enhanced named tuple
│ │ │ │References: #4710
│ │ │ │ │ │ │ │ │ │ │ │ -[engine] [change] [performance] [py3k] ¶
Disabled the “unicode returns” check that runs on dialect startup when │ │ │ │ +
[engine] [performance] ¶
The pool “pre-ping” feature has been refined to not invoke for a DBAPI │ │ │ │ +connection that was just opened in the same checkout operation. pre ping │ │ │ │ +only applies to a DBAPI connection that’s been checked into the pool │ │ │ │ +and is being checked out again.
│ │ │ │ +References: #4524
│ │ │ │ + │ │ │ │ +[engine] [performance] [change] [py3k] ¶
Disabled the “unicode returns” check that runs on dialect startup when │ │ │ │ running under Python 3, which for many years has occurred in order to test │ │ │ │ the current DBAPI’s behavior for whether or not it returns Python Unicode │ │ │ │ or Py2K strings for the VARCHAR and NVARCHAR datatypes. The check still │ │ │ │ occurs by default under Python 2, however the mechanism to test the │ │ │ │ behavior will be removed in SQLAlchemy 2.0 when Python 2 support is also │ │ │ │ removed.
│ │ │ │This logic was very effective when it was needed, however now that Python 3
│ │ │ │ @@ -9258,21 +9265,14 @@
│ │ │ │ dialect flags by setting the dialect level flag returns_unicode_strings
│ │ │ │ to one of String.RETURNS_CONDITIONAL
or
│ │ │ │ String.RETURNS_BYTES
, both of which will enable Unicode conversion
│ │ │ │ even under Python 3.
References: #5315
│ │ │ │ │ │ │ │[engine] [performance] ¶
The pool “pre-ping” feature has been refined to not invoke for a DBAPI │ │ │ │ -connection that was just opened in the same checkout operation. pre ping │ │ │ │ -only applies to a DBAPI connection that’s been checked into the pool │ │ │ │ -and is being checked out again.
│ │ │ │ -References: #4524
│ │ │ │ - │ │ │ │ -[engine] [bug] ¶
Revised the Connection.execution_options.schema_translate_map
│ │ │ │ feature such that the processing of the SQL statement to receive a specific
│ │ │ │ schema name occurs within the execution phase of the statement, rather than
│ │ │ │ at the compile phase. This is to support the statement being efficiently
│ │ │ │ cached. Previously, the current schema being rendered into the statement
│ │ │ │ for a particular run would be considered as part of the cache key itself,
│ │ │ │ meaning that for a run against hundreds of schemas, there would be hundreds
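For context, the `schema_translate_map` behavior change recorded in the changelog hunk above can be exercised with a minimal sketch (the table name, schema names, and use of SQLite here are illustrative assumptions, not taken from the diff):

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, select

metadata = MetaData()
# Table declared against a symbolic schema name; schema_translate_map
# substitutes the real schema at execution time, so the cached SQL
# statement is reused across schemas rather than recompiled per schema.
accounts = Table(
    "accounts",
    metadata,
    Column("id", Integer, primary_key=True),
    schema="per_user",
)

engine = create_engine("sqlite://")
with engine.connect() as conn:
    conn = conn.execution_options(schema_translate_map={"per_user": "main"})
    metadata.create_all(conn, checkfirst=False)  # DDL is translated as well
    conn.execute(accounts.insert().values(id=7))
    value = conn.execute(select(accounts.c.id)).scalar_one()
print(value)
```

Because the translation happens at execution rather than compile time, one cached statement serves every schema the map points at.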
│ │ │ │ ├── html2text {}
│ │ │ │ │ @@ -6354,15 +6354,21 @@
│ │ │ │ │ returned by the ResultProxy is now the LegacyRow subclass, which maintains
│ │ │ │ │ mapping/tuple hybrid behavior, however the base _R_o_w class now behaves more
│ │ │ │ │ fully like a named tuple.
│ │ │ │ │ See also
│ │ │ │ │ _R_o_w_P_r_o_x_y_ _i_s_ _n_o_ _l_o_n_g_e_r_ _a_ _“_p_r_o_x_y_”_;_ _i_s_ _n_o_w_ _c_a_l_l_e_d_ _R_o_w_ _a_n_d_ _b_e_h_a_v_e_s_ _l_i_k_e_ _a_n_ _e_n_h_a_n_c_e_d
│ │ │ │ │ _n_a_m_e_d_ _t_u_p_l_e
│ │ │ │ │ References: _#_4_7_1_0
│ │ │ │ │ -[[eennggiinnee]] [[cchhaannggee]] [[ppeerrffoorrmmaannccee]] [[ppyy33kk]] _¶
│ │ │ │ │ +[[eennggiinnee]] [[ppeerrffoorrmmaannccee]] _¶
│ │ │ │ │ +The pool “pre-ping” feature has been refined to not invoke for a DBAPI
│ │ │ │ │ +connection that was just opened in the same checkout operation. pre ping only
│ │ │ │ │ +applies to a DBAPI connection that’s been checked into the pool and is being
│ │ │ │ │ +checked out again.
│ │ │ │ │ +References: _#_4_5_2_4
│ │ │ │ │ +[[eennggiinnee]] [[ppeerrffoorrmmaannccee]] [[cchhaannggee]] [[ppyy33kk]] _¶
│ │ │ │ │ Disabled the “unicode returns” check that runs on dialect startup when running
│ │ │ │ │ under Python 3, which for many years has occurred in order to test the current
│ │ │ │ │ DBAPI’s behavior for whether or not it returns Python Unicode or Py2K strings
│ │ │ │ │ for the VARCHAR and NVARCHAR datatypes. The check still occurs by default under
│ │ │ │ │ Python 2, however the mechanism to test the behavior will be removed in
│ │ │ │ │ SQLAlchemy 2.0 when Python 2 support is also removed.
│ │ │ │ │ This logic was very effective when it was needed, however now that Python 3 is
│ │ │ │ │ @@ -6370,20 +6376,14 @@
│ │ │ │ │ datatypes. In the unlikely case that a third party DBAPI does not support this,
│ │ │ │ │ the conversion logic within _S_t_r_i_n_g is still available and the third party
│ │ │ │ │ dialect may specify this in its upfront dialect flags by setting the dialect
│ │ │ │ │ level flag returns_unicode_strings to one of String.RETURNS_CONDITIONAL or
│ │ │ │ │ String.RETURNS_BYTES, both of which will enable Unicode conversion even under
│ │ │ │ │ Python 3.
│ │ │ │ │ References: _#_5_3_1_5
│ │ │ │ │ -[[eennggiinnee]] [[ppeerrffoorrmmaannccee]] _¶
│ │ │ │ │ -The pool “pre-ping” feature has been refined to not invoke for a DBAPI
│ │ │ │ │ -connection that was just opened in the same checkout operation. pre ping only
│ │ │ │ │ -applies to a DBAPI connection that’s been checked into the pool and is being
│ │ │ │ │ -checked out again.
│ │ │ │ │ -References: _#_4_5_2_4
│ │ │ │ │ [[eennggiinnee]] [[bbuugg]] _¶
│ │ │ │ │ Revised the _C_o_n_n_e_c_t_i_o_n_._e_x_e_c_u_t_i_o_n___o_p_t_i_o_n_s_._s_c_h_e_m_a___t_r_a_n_s_l_a_t_e___m_a_p feature such that
│ │ │ │ │ the processing of the SQL statement to receive a specific schema name occurs
│ │ │ │ │ within the execution phase of the statement, rather than at the compile phase.
│ │ │ │ │ This is to support the statement being efficiently cached. Previously, the
│ │ │ │ │ current schema being rendered into the statement for a particular run would be
│ │ │ │ │ considered as part of the cache key itself, meaning that for a run against
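The pre-ping refinement described in the reordered changelog entries above applies to engines configured roughly as in this minimal sketch against an in-memory SQLite database (the SELECT statements are illustrative):

```python
from sqlalchemy import create_engine, text

# pool_pre_ping=True emits a cheap liveness "ping" when a pooled DBAPI
# connection is checked out again; per the #4524 refinement, a DBAPI
# connection freshly opened within the same checkout is not re-pinged.
engine = create_engine("sqlite://", pool_pre_ping=True)

with engine.connect() as conn:  # first checkout: brand-new connection, no ping
    first = conn.execute(text("select 1")).scalar_one()

with engine.connect() as conn:  # checkout of a pooled connection: ping runs first
    second = conn.execute(text("select 1")).scalar_one()

print(first, second)
```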
│ │ │ ├── ./usr/share/doc/python-sqlalchemy-doc/html/orm/examples.html
│ │ │ │┄ Ordering differences only
│ │ │ │ @@ -319,29 +319,29 @@
│ │ │ │
│ │ │ │
Examples illustrating the asyncio engine feature of SQLAlchemy.
│ │ │ │Listing of files:
async_orm_writeonly.py - Illustrates using write only relationships for simpler handling │ │ │ │ +of ORM collections under asyncio.
│ │ │ │ +async_orm.py - Illustrates use of the sqlalchemy.ext.asyncio.AsyncSession
object
│ │ │ │ for asynchronous ORM use.
basic.py - Illustrates the asyncio engine / connection interface.
│ │ │ │ +greenlet_orm.py - Illustrates use of the sqlalchemy.ext.asyncio.AsyncSession object │ │ │ │ +for asynchronous ORM use, including the optional run_sync() method.
│ │ │ │async_orm_writeonly.py - Illustrates using write only relationships for simpler handling │ │ │ │ -of ORM collections under asyncio.
│ │ │ │ +basic.py - Illustrates the asyncio engine / connection interface.
│ │ │ │gather_orm_statements.py - Illustrates how to run many statements concurrently using asyncio.gather()
│ │ │ │ along many asyncio database connections, merging ORM results into a single
│ │ │ │ AsyncSession
.
greenlet_orm.py - Illustrates use of the sqlalchemy.ext.asyncio.AsyncSession object │ │ │ │ -for asynchronous ORM use, including the optional run_sync() method.
│ │ │ │ -An example of persistence for a directed graph structure. The
│ │ │ │ graph is stored as a collection of edges, each referencing both a
│ │ │ │ @@ -378,37 +378,37 @@
│ │ │ │ subclassing the HasAddresses
mixin, which ensures that the
│ │ │ │ parent class is provided with an addresses
collection
│ │ │ │ which contains Address
objects.
The discriminator_on_association.py and generic_fk.py scripts │ │ │ │ are modernized versions of recipes presented in the 2007 blog post │ │ │ │ Polymorphic Associations with SQLAlchemy.
│ │ │ │Listing of files:
discriminator_on_association.py - Illustrates a mixin which provides a generic association │ │ │ │ +using a single target table and a single association table, │ │ │ │ +referred to by all parent tables. The association table │ │ │ │ +contains a “discriminator” column which determines what type of │ │ │ │ +parent object associates to each particular row in the association │ │ │ │ +table.
│ │ │ │ +table_per_related.py - Illustrates a generic association which persists association │ │ │ │ +objects within individual tables, each one generated to persist │ │ │ │ +those objects on behalf of a particular parent class.
│ │ │ │ +generic_fk.py - Illustrates a so-called “generic foreign key”, in a similar fashion │ │ │ │ to that of popular frameworks such as Django, ROR, etc. This │ │ │ │ approach bypasses standard referential integrity │ │ │ │ practices, in that the “foreign key” column is not actually │ │ │ │ constrained to refer to any particular table; instead, │ │ │ │ in-application logic is used to determine which table is referenced.
│ │ │ │table_per_association.py - Illustrates a mixin which provides a generic association │ │ │ │ via a individually generated association tables for each parent class. │ │ │ │ The associated objects themselves are persisted in a single table │ │ │ │ shared among all parents.
│ │ │ │table_per_related.py - Illustrates a generic association which persists association │ │ │ │ -objects within individual tables, each one generated to persist │ │ │ │ -those objects on behalf of a particular parent class.
│ │ │ │ -discriminator_on_association.py - Illustrates a mixin which provides a generic association │ │ │ │ -using a single target table and a single association table, │ │ │ │ -referred to by all parent tables. The association table │ │ │ │ -contains a “discriminator” column which determines what type of │ │ │ │ -parent object associates to each particular row in the association │ │ │ │ -table.
│ │ │ │ -Illustrates the “materialized paths” pattern for hierarchical data using the │ │ │ │ SQLAlchemy ORM.
│ │ │ │ @@ -477,33 +477,33 @@ │ │ │ │See also
│ │ │ │ │ │ │ │Listing of files:
bulk_updates.py - This series of tests will illustrate different ways to UPDATE a large number │ │ │ │ -of rows in bulk (under construction! there’s just one test at the moment)
│ │ │ │ -bulk_inserts.py - This series of tests illustrates different ways to INSERT a large number │ │ │ │ -of rows in bulk.
│ │ │ │ -__main__.py - Allows the examples/performance package to be run as a script.
│ │ │ │ -large_resultsets.py - In this series of tests, we are looking at time to load a large number │ │ │ │ -of very small and simple rows.
│ │ │ │ -single_inserts.py - In this series of tests, we’re looking at a method that inserts a row │ │ │ │ within a distinct transaction, and afterwards returns to essentially a │ │ │ │ “closed” state. This would be analogous to an API call that starts up │ │ │ │ a database connection, inserts the row, commits and closes.
│ │ │ │short_selects.py - This series of tests illustrates different ways to SELECT a single │ │ │ │ record by primary key
│ │ │ │__main__.py - Allows the examples/performance package to be run as a script.
│ │ │ │ +bulk_inserts.py - This series of tests illustrates different ways to INSERT a large number │ │ │ │ +of rows in bulk.
│ │ │ │ +large_resultsets.py - In this series of tests, we are looking at time to load a large number │ │ │ │ +of very small and simple rows.
│ │ │ │ +bulk_updates.py - This series of tests will illustrate different ways to UPDATE a large number │ │ │ │ +of rows in bulk (under construction! there’s just one test at the moment)
│ │ │ │ +This is the default form of run:
│ │ │ │$ python -m examples.performance single_inserts
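For orientation, the Core bulk-INSERT pattern that the bulk_inserts.py tests listed above measure looks roughly like the following sketch (table and column names are illustrative assumptions):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, func, insert, select)

metadata = MetaData()
customer = Table(
    "customer",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(255)),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

# Passing a list of parameter dictionaries runs as a single DBAPI
# executemany call, the fast Core path the bulk-insert tests exercise.
with engine.begin() as conn:
    conn.execute(insert(customer), [{"name": f"c{i}"} for i in range(1000)])

with engine.connect() as conn:
    count = conn.execute(select(func.count()).select_from(customer)).scalar_one()
print(count)
```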
│ │ │ │ @@ -751,23 +751,23 @@
│ │ │ │ Several examples that illustrate the technique of intercepting changes
│ │ │ │ that would be first interpreted as an UPDATE on a row, and instead turning
│ │ │ │ it into an INSERT of a new row, leaving the previous row intact as
│ │ │ │ a historical version.
│ │ │ │ Compare to the Versioning with a History Table example which writes a
│ │ │ │ history row to a separate history table.
│ │ │ │ Listing of files:
versioned_rows_w_versionid.py - Illustrates a method to intercept changes on objects, turning │ │ │ │ -an UPDATE statement on a single row into an INSERT statement, so that a new │ │ │ │ -row is inserted with the new data, keeping the old row intact.
│ │ │ │ -versioned_update_old_row.py - Illustrates the same UPDATE into INSERT technique of versioned_rows.py
,
│ │ │ │ but also emits an UPDATE on the old row to affect a change in timestamp.
│ │ │ │ Also includes a SessionEvents.do_orm_execute()
hook to limit queries
│ │ │ │ to only the most recent version.
versioned_rows_w_versionid.py - Illustrates a method to intercept changes on objects, turning │ │ │ │ +an UPDATE statement on a single row into an INSERT statement, so that a new │ │ │ │ +row is inserted with the new data, keeping the old row intact.
│ │ │ │ +versioned_map.py - A variant of the versioned_rows example built around the │ │ │ │ concept of a “vertical table” structure, like those illustrated in │ │ │ │ Vertical Attribute Mapping examples.
│ │ │ │versioned_rows.py - Illustrates a method to intercept changes on objects, turning │ │ │ │ an UPDATE statement on a single row into an INSERT statement, so that a new │ │ │ │ row is inserted with the new data, keeping the old row intact.
│ │ │ │ @@ -815,42 +815,42 @@ │ │ │ │Working examples of single-table, joined-table, and concrete-table │ │ │ │ inheritance as described in Mapping Class Inheritance Hierarchies.
│ │ │ │Listing of files:
concrete.py - Concrete-table (table-per-class) inheritance example.
│ │ │ │ +joined.py - Joined-table (table-per-subclass) inheritance example.
│ │ │ │single.py - Single-table (table-per-hierarchy) inheritance example.
│ │ │ │concrete.py - Concrete-table (table-per-class) inheritance example.
│ │ │ │ -Examples illustrating modifications to SQLAlchemy’s attribute management │ │ │ │ system.
│ │ │ │Listing of files:
listen_for_events.py - Illustrates how to attach events to all instrumented attributes │ │ │ │ and listen for change events.
│ │ │ │custom_management.py - Illustrates customized class instrumentation, using
│ │ │ │ -the sqlalchemy.ext.instrumentation
extension package.
active_column_defaults.py - Illustrates use of the AttributeEvents.init_scalar()
│ │ │ │ event, in conjunction with Core column defaults to provide
│ │ │ │ ORM objects that automatically produce the default value
│ │ │ │ when an un-set attribute is accessed.
custom_management.py - Illustrates customized class instrumentation, using
│ │ │ │ +the sqlalchemy.ext.instrumentation
extension package.
A basic example of using the SQLAlchemy Sharding API. │ │ │ │ Sharding refers to horizontally scaling data across multiple │ │ │ │ @@ -879,24 +879,24 @@ │ │ │ │
The construction of generic sharding routines is an ambitious approach │ │ │ │ to the issue of organizing instances among multiple databases. For a │ │ │ │ more plain-spoken alternative, the “distinct entity” approach │ │ │ │ is a simple method of assigning objects to different tables (and potentially │ │ │ │ database nodes) in an explicit way - described on the wiki at │ │ │ │ EntityName.
│ │ │ │Listing of files:
separate_schema_translates.py - Illustrates sharding using a single database with multiple schemas, │ │ │ │ +where a different “schema_translates_map” can be used for each shard.
│ │ │ │ +separate_tables.py - Illustrates sharding using a single SQLite database, that will however │ │ │ │ have multiple tables using a naming convention.
│ │ │ │asyncio.py - Illustrates sharding API used with asyncio.
│ │ │ │separate_databases.py - Illustrates sharding using distinct SQLite databases.
│ │ │ │separate_schema_translates.py - Illustrates sharding using a single database with multiple schemas, │ │ │ │ -where a different “schema_translates_map” can be used for each shard.
│ │ │ │ -