OnyxOS Updated to Release 5.5
OEM Team
Data Ingestion Pipelines – General
- Preprocessing audit error logging changes –
a. Added a pipeline type condition for Formulary.
b. Added a count of unique rows.
- Common functions – added a Write Delta function; it is used to write Delta files to the preprocess location (see the sketch after this list).
- GlobalVariables.py – reads the CloudWatch channel from the common functions library.
- Reporting Dashboard changes –
a. Added a dropdown list for Report type and IG type.
- Added an empty file function that checks whether CSV files exist in the source location and, if not, uploads source files with headers only.
- CloudWatch and IAM role – removed the usage of AWS secrets and replaced it with an IAM role.
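The Write Delta helper referenced in the list above might look roughly like the following PySpark sketch; the function name, parameters, and write options are assumptions for illustration, not the actual common functions API.

```python
from pyspark.sql import DataFrame


def write_delta(df: DataFrame, preprocess_path: str, mode: str = "overwrite") -> None:
    """Write a DataFrame as Delta files to the preprocess location (illustrative sketch)."""
    (
        df.write
        .format("delta")
        .mode(mode)                         # e.g. "overwrite" or "append"
        .option("overwriteSchema", "true")  # tolerate schema changes on rewrite
        .save(preprocess_path)              # e.g. "dbfs:/mnt/preprocess/<pipeline>"
    )
```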
Data Ingestion Pipelines – Claims
- Added the CloudWatch library from the common functions repo.
- Added a default file generator.
- Parameterized FHIR converter repartitioning.
- Parameterized the CloudWatch occurrence in all files.
- Removed coalesce when writing files.
- Source files are now read using a read function instead of reading a Parquet file directly.
- Added an empty file generator that creates empty files in the source file location (see the sketch after this list).
- Added the Spark Catalog instead of the default metadata.
- Included Unity Catalog changes.
- Matched sample files with the Data Guide.
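The default/empty file generator could look roughly like this Databricks-style sketch; the function name, the expected_files mapping, and the header strings are hypothetical, and dbutils is assumed to be provided by the Databricks runtime.

```python
def generate_default_files(source_path: str, expected_files: dict) -> None:
    """Create header-only CSV files for any expected source file that is missing (sketch)."""
    existing = {f.name for f in dbutils.fs.ls(source_path)}  # dbutils comes from the Databricks runtime
    for file_name, header in expected_files.items():
        if file_name not in existing:
            # Write a CSV containing only the header row so downstream reads do not fail.
            dbutils.fs.put(f"{source_path}/{file_name}", header + "\n", False)


# Example usage with made-up file names and columns:
generate_default_files(
    "dbfs:/mnt/source/claims",
    {"claim_header.csv": "claim_id,member_id,service_date"},
)
```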
Data Ingestion Pipelines – Clinical
- Added the CloudWatch library from the common functions repo.
- Added a default file generator.
- Parameterized FHIR converter repartitioning.
- Parameterized the CloudWatch occurrence in all files.
- Removed coalesce when writing files.
- Source files are now read using a read function instead of reading a Parquet file directly.
- Added the Spark Catalog instead of the default metadata.
- Removed the path check, as none of the profiles are interdependent.
- Included three more profiles in the workflow.
- Included Unity Catalog changes (see the sketch after this list).
- Matched sample files with the Data Guide.
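A rough illustration of the Spark Catalog / Unity Catalog change mentioned above; the catalog, schema, and table names below are placeholders, not actual OnyxOS objects.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "final")], ["observation_id", "status"])  # toy data

# Before: the table name resolved against the default metastore database.
df.write.format("delta").mode("overwrite").saveAsTable("clinical_observation")

# After: an explicit three-level Unity Catalog name (catalog.schema.table);
# "onyx_dev" and "clinical" are placeholder catalog and schema names.
df.write.format("delta").mode("overwrite").saveAsTable("onyx_dev.clinical.observation")
```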
Data Ingestion Pipelines – Provider Directory
- Added the CloudWatch library from common functions.
- Added a default file generator.
- Parameterized FHIR converter repartitioning.
- Parameterized the CloudWatch occurrence in all files.
- Removed coalesce when writing files.
- Source files are now read using a read function instead of reading a Parquet file directly.
- Added the Spark Catalog instead of the default metadata.
- Removed the path check, as none of the profiles are interdependent.
- Removed the PVD common functions library.
- Consolidated common functions into the common-wheel-package (see the sketch after this list).
- Included Unity Catalog changes.
- Matched sample files with the Data Guide.
- File name changes in the setup file.
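The consolidation into the common-wheel-package might be reflected in a setup file along these lines; the distribution name, version, package layout, and dependencies are illustrative assumptions only.

```python
from setuptools import find_packages, setup

setup(
    name="onyx-common-functions",   # placeholder distribution name
    version="5.5.0",                # version chosen for illustration
    packages=find_packages(include=["common_functions", "common_functions.*"]),
    install_requires=["boto3"],     # e.g. CloudWatch helpers depend on boto3
)
```

Running `pip wheel .` against such a setup file would produce the wheel that the clusters install.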
Client Apps Team
FITE:
- Optum configuration for the Quartz UAT and PROD environments
- Capability statement update for ABACUS to fix the FITE Cards issue
- US Core/Clinical capability statement update for CHGSD UAT to fix the reported resource count issue
- Fixed a missing search parameter issue in US Core/Clinical for CHGSD
SLAP V2:
- Code updates in the member match logic to fix the IDAHO member login issue
SLAP V3:
- Code fixes for ABACUS based on issues reported after testing in the ABACUS environment.
Reporting Dashboard:
- Monthly Log reports for API access deployed to IDAHO UAT and PROD
PAA Team
Claims: CHGSD (V4.2)
- Issue: In the main EOB_Inpatient notebook, during the transformation query for Claims_item_modifier_stack, the Business Identifier variable should be in capital letters but was lowercase, which caused the Inpatient pipeline to fail.
Resolution: We modified the variable to use the capitalized Business Identifier (see the sketch below).
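One plausible way to express the capitalization fix in PySpark is sketched below; the column names and values are assumptions, not the actual Claims_item_modifier_stack schema or the exact change that was deployed.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
# Toy stand-in for the transformation input; column names are illustrative.
df = spark.createDataFrame([("mb", "claim-001")], ["business_identifier", "claim_id"])

# Force the Business Identifier into upper case so downstream Inpatient logic
# that expects capital letters no longer fails on lowercase values.
fixed = df.withColumn("business_identifier", F.upper("business_identifier"))
fixed.show()
```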
- Issue: During the conditional init script, one of the folders under the path /FileStore/tables/Claims/SAFHIR/AppLog/Retry was not being created, resulting in a failure during ingestion.
Resolution: The [5:] index slice was causing the issue when creating the folders, since the call expects the complete path. We removed the [5:] slice; this change is deployed to prod and working as expected (see the sketch below).
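The slicing problem can be illustrated with a plain Python sketch; the variable name below is an assumption, and the actual init script code is not shown here.

```python
retry_path = "/FileStore/tables/Claims/SAFHIR/AppLog/Retry"

# Before: slicing off the first five characters left an incomplete path,
# so the folder-creation call never received the full location.
truncated = retry_path[5:]
print(truncated)   # "Store/tables/Claims/SAFHIR/AppLog/Retry"

# After: pass the complete path unchanged, e.g. to dbutils.fs.mkdirs(retry_path)
# when running on Databricks.
print(retry_path)
```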
Clinical: Quartz (V4.3.2)
- Issue: While developing the PySpark version of the Clinical code, the client sent us an incorrect file naming convention, so the code was developed against the naming provided. When the code moved to UAT, the client sent the same file naming convention that had been provided internally, so we deployed it to production. In production, however, we are receiving a different naming convention for two of the profiles, Immunization and Laboratory Result.
Resolution: We modified the code in CopySourceFiles.py to handle all naming cases (see the sketch below).
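A hypothetical sketch of how CopySourceFiles.py could tolerate more than one naming convention per profile; the profile names and filename patterns are illustrative, not the actual client conventions.

```python
import fnmatch
from typing import Optional

# Accept several patterns per profile so either naming convention matches.
PROFILE_PATTERNS = {
    "Immunization": ["Immunization*.csv", "IMM_*.csv"],
    "LaboratoryResult": ["LaboratoryResult*.csv", "Lab_Result*.csv"],
}


def match_profile(file_name: str) -> Optional[str]:
    """Return the profile a source file belongs to, regardless of naming convention."""
    for profile, patterns in PROFILE_PATTERNS.items():
        if any(fnmatch.fnmatch(file_name, pattern) for pattern in patterns):
            return profile
    return None


print(match_profile("IMM_20240131.csv"))         # -> "Immunization"
print(match_profile("Lab_Result_20240131.csv"))  # -> "LaboratoryResult"
```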