Resolved issues in Content Delivery

This topic describes previously documented known issues in Content Delivery that are now resolved.

High publishing volume could lead to blocking sessions
Intermittent blocking sessions were found on the Broker database during periods of high publishing volume. After DBAs killed the sessions, new blocking sessions appeared and caused publishing failures; restarting services was required as a workaround.
The issue is now resolved.
Upgrade failed with Liquibase error
A customer encountered the following error while upgrading the databases: Unexpected error running Liquibase: An explicit DROP INDEX is not allowed on index 'ITEMS.PK_ITEMS'.
The issue is now resolved.
Under certain circumstances, the Content Deployer did not clean up working directories
Working directories containing unzipped transport packages would fail to be deleted under certain circumstances. The problem would get worse over time, as the Content Deployer would spend more and more time trying to delete the growing number of directories.
The issue is now resolved, and you can schedule a periodic cleanup of such directories by configuring a cron expression in application.properties.
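For illustration only, such a schedule might look like the following in application.properties. The property name shown here is hypothetical; consult the product documentation for the actual key and default value.

```properties
# Hypothetical property name, for illustration only.
# Runs the working-directory cleanup every day at 02:00 (cron format).
deployer.cleanup.cron=0 0 2 * * *
```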
Unpublishing would not always correctly remove references
An unpublish action on an item would not always correctly process references to the unpublished item.
The issue is now resolved.
After an upgrade, publishing an entire publication only worked if you first published Pages
The following error would occur if, after upgrading, you attempted to publish an entire publication without first publishing Pages:
com.microsoft.sqlserver.jdbc.SQLServerException: Violation of PRIMARY KEY constraint 'PK_PAGE_COMPONENT_LINK'. 
Cannot insert duplicate key in object 'dbo.PAGE_COMPONENT_LINK'.
This issue is now resolved.
You could not have two metadata fields with the same name but different types
If two different Schemas contained fields with the same name but a different type (for example, one a single-value field and the other a multivalue field), then publishing a Component based on one of these Schemas, followed by publishing a Component based on the other, would fail. This happened because OpenSearch (formerly Elasticsearch) could not distinguish the two fields without a type.
The issue is resolved by storing the data type with each field. The search query now also includes a type field, so that named fields of a specific type can be retrieved.
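The idea behind the fix can be sketched as follows. This is a simplified illustration of keying fields by name and type together, not the product's actual indexing code:

```python
# Minimal sketch (not the product's implementation): indexing fields by
# (name, type) instead of name alone avoids collisions between two
# Schemas that declare a field with the same name but different types.
index = {}

def index_field(name, field_type, value):
    # Keying on (name, type) keeps single-value and multivalue
    # variants of the same field name separate.
    index[(name, field_type)] = value

def query_field(name, field_type):
    # The query now carries the type, so it retrieves the right variant.
    return index.get((name, field_type))

index_field("color", "single", "red")
index_field("color", "multi", ["red", "blue"])  # no longer clashes
```

With name-only keys, the second call would overwrite the first; keying on the pair lets both variants coexist and be queried unambiguously.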
If you changed the type of a Schema field, you had to unpublish any items using that Schema
If, after publishing items based on a Schema, you changed the field type of a field in that Schema, your next publish action would fail to deploy, with an error of the following type in the logs:
2022-05-10 08:11:26,919 ERROR [http-nio2-8097-exec-1] DocumentController - Couldn't perform create/update request.
com.sdl.delivery.iq.index.api.provider.IndexProviderException: Data is already published for the NewEmleDateField+NewdateField field with a different type
The issue is resolved by storing the data type with each field. The search query now also includes a type field, so that named fields of a specific type can be retrieved. A migration script is provided for existing content.