OnyxOS 5.2 Early Fall 2023 Edition - Aug 09, 2023


OnyxOS Updated to Release 5.2

MMHD

MMHD API changes:

In this update, we have introduced crucial modifications to the MMHD API, enhancing its functionality and compatibility with various payers. These changes address inconsistent payer practices around fetching patient data and generating OAuth tokens with different grant types.

Key Changes:

  1. Modified Data Tables: We have made significant updates to the data tables within the MMHD system. Specifically, we have added fields to store both the “Patient_Id” reference and the “Oauth_Grant_type” associated with each payer. This alteration enables a standardized approach to handling patient data retrieval and OAuth token generation.

  2. Enhanced Patient Data Retrieval: The API has been reworked to retrieve patient data using the “Patient_Id” reference stored in the database, streamlining the acquisition of patient-related information while ensuring consistency across payers (a sketch of this flow appears after this list).

  3. Payer FHIR URL Analysis: As part of our efforts to improve interoperability, we have rigorously tested the FHIR URLs of each payer. By doing so, we have compiled a comprehensive list of payers and the specific parameters they support. This step was essential to meet MMHD requirements, which necessitate parameters like “_total,” “_count,” and “_lastUpdated” in the payer’s FHIR URLs.

  4. Parameter Support Information: Our data tables have been further enhanced to incorporate information regarding parameter support for each payer. This modification empowers the API to seamlessly manage parameter-based interactions with different payers.

  5. Parameter Handling in Progress: Ongoing work involves coding changes to effectively handle parameter support for various payers. We are dedicated to ensuring a smooth and unified experience across all interactions.

  6. Expanded Scope Handling: Notably, we have broadened the API’s capabilities to encompass the scope “patient/*.read.” This enhancement facilitates the retrieval of comprehensive clinical data for patients. In addition, we are actively developing modifications to accommodate other non-generic scopes, with progress underway.
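
A minimal sketch of how such a fetch could look, assuming a payer record carrying the new fields; the field names, endpoint shape, and parameter values here are illustrative, not the actual MMHD schema:

```python
# Hedged sketch of the per-payer retrieval flow described above; the payer
# record fields and endpoint shape are illustrative, not the actual schema.
import requests

def fetch_patient_data(payer, token):
    """Search for the patient via the stored Patient_Id, passing only the
    parameters this payer is known to support."""
    params = {"_id": payer["patient_id"]}
    for name, value in (("_count", 100), ("_lastUpdated", "gt2023-01-01")):
        if name in payer["supported_params"]:  # from the payer data table
            params[name] = value
    resp = requests.get(
        f"{payer['fhir_url']}/Patient",
        params=params,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example payer row mirroring the new fields described above:
payer = {
    "fhir_url": "https://payer.example.com/fhir",    # hypothetical
    "patient_id": "abc-123",                         # stored Patient_Id
    "oauth_grant_type": "authorization_code",        # stored Oauth_Grant_type
    "supported_params": ["_count", "_lastUpdated"],  # per-payer support list
}
```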

MMHD OAuth server changes:

  1. The OAuth server has been modified to accept different authorization grant types according to each payer’s requirements.

  2. Each payer’s grant type is stored in the database after being verified via Postman. This is done as part of the payer onboarding process and is currently added to the database table manually.

  3. All required auth parameters are stored as part of the authentication URL in the database itself (see the sketch below).
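
A hedged sketch of the grant-type dispatch described above; the payer fields, token endpoint, and grant-type handling shown are assumptions for illustration:

```python
# Minimal sketch of grant-type dispatch on the OAuth side; field names and
# endpoints are assumptions based on the notes above.
import requests

def request_token(payer):
    """Build the token request according to the grant type stored for
    this payer in the database (verified via Postman at onboarding)."""
    grant_type = payer["oauth_grant_type"]
    data = {"grant_type": grant_type}
    if grant_type == "client_credentials":
        data.update(client_id=payer["client_id"],
                    client_secret=payer["client_secret"])
    elif grant_type == "authorization_code":
        data.update(client_id=payer["client_id"],
                    code=payer["auth_code"],          # from the login redirect
                    redirect_uri=payer["redirect_uri"])
    resp = requests.post(payer["token_url"], data=data, timeout=30)
    resp.raise_for_status()
    return resp.json()["access_token"]
```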

MMHD Payer onboarding via Developer Portal:

  • Collected the list of payers and plans that are Patient Access API (PAA) compliant from the CMS Compliance Tracker.

  • Visited the developer portals of the payers that are PAA compliant according to the collected compliance report.

  • Used Anu’s credentials (anu.shrestha@onyxhealth.io) for payers where an account was already registered.

  • Registered new accounts under appadmin@onyxhealth.io for new payers.

  • Registering our third-party application MoveMyHealthData with payers takes one of the following paths:

    • Directly registering the app in the developer portal by filling in the necessary details; a Client ID and Client Secret are assigned.
    • Registration initiated through form filling; the Client ID and Client Secret are sent to the registered email.
    • Registration initiated through email by sending the necessary details; the Client ID and Client Secret are sent to the registered email.
    • Questionnaires need to be answered for app approval.
  • Contacted most of the PAA-compliant payers and successfully registered our app. Some payers still haven’t replied.

  • Registered the app with payers whose portals are not accessible outside the US region, using a jump box and with the help of onsite staff.

  • In total, we have successfully registered MMHD with 49 payers, comprising 366 health plans.

| Sl No | Client Name | Health Plans |
|------:|-------------|-------------:|
| 1 | Aetna | 42 |
| 2 | Cigna | 18 |
| 3 | United HealthCare | 78 |
| 4 | Promedica Health System | 3 |
| 5 | Blue Cross Blue Shield of North Carolina | 3 |
| 6 | Allina Health and Aetna Insurance Holding Company | 1 |
| 7 | Mercy Care | 1 |
| 8 | Innovation Health Holdings, LLC | 1 |
| 9 | Anthem Inc | 43 |
| 10 | Blue Cross & Blue Shield of Rhode Island | 2 |
| 11 | iCare | 1 |
| 12 | Highmark New York | 2 |
| 13 | CMS | 1 |
| 14 | Blue Cross and Blue Shield of Kansas City | 2 |
| 15 | Capital BlueCross | 3 |
| 16 | Aware Integrated, Inc. | 3 |
| 17 | Sharp Healthcare | 1 |
| 18 | Independence Health Group, Inc. | 6 |
| 19 | ATRIO Health Plans | 4 |
| 20 | Humana Inc | 47 |
| 21 | Bright Health Group, Inc. | 14 |
| 22 | Blue Cross Blue Shield of Arizona | 4 |
| 23 | Health Care Service Corporation | 12 |
| 24 | Blue Cross Blue Shield of Kansas | 1 |
| 25 | Henry Ford Health System | 4 |
| 26 | Premera | 2 |
| 27 | AllCare Health, Inc. | 2 |
| 28 | Riverspring Health Holding Corp. | 1 |
| 29 | Kaiser Foundation Health Plan, Inc. | 7 |
| 30 | BlueCross BlueShield of Alabama | 1 |
| 31 | Triple-S Management Corporation | 2 |
| 32 | Cambia Health Solutions, Inc. | 9 |
| 33 | St Francis Health System & St John Health System | 2 |
| 34 | UPMC Health System | 5 |
| 35 | Guidewell Mutual Holding Corporation | 4 |
| 36 | Moda Partners, Inc. | 2 |
| 37 | AIDS Healthcare Foundation | 3 |
| 38 | Highmark Health | 6 |
| 39 | CareFirst, Inc. | 3 |
| 40 | New York City Health and Hospitals Corporation | 1 |
| 41 | Central Mass Health Holding LLC | 2 |
| 42 | SCAN Group | 5 |
| 43 | AlohaCare | 1 |
| 44 | Clover Health Holdings, Inc. | 2 |
| 45 | BlueCross BlueShield of Tennessee | 4 |
| 46 | Community Health Plan of Washington | 1 |
| 47 | Athena Healthcare Holdings, LLC | 1 |
| 48 | Baylor Scott & White Holdings | 3 |
|  | **Total** | **366** |

Reporting Dashboard

Reporting Dashboard API and Timer Trigger Changes

What’s New:

  • Integrated with SLAP to include the login report: successful login count, unsuccessful login count, total user transactions, and total unique users.
  • Integrated with AWS: created an API that exposes only the fields required for the AWS Reporting Dashboard integration.
  • Support for multiple clients: modified the APIs to keep separate task-detail tables for multiple clients in a single database, based on the client’s name.
  • Modified the timer trigger to delete records older than 90 days from the client tables (a sketch follows this list).
  • Migration from the development environment to the production environment.
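
A minimal sketch of such a cleanup, assuming an Azure Functions timer trigger with pyodbc; the table names, column name, and connection setting are placeholders:

```python
# Hedged sketch of the 90-day cleanup; table list, column name, and the
# connection string environment variable are illustrative placeholders.
import os
import pyodbc
import azure.functions as func

CLIENT_TABLES = ["client_a_task_details", "client_b_task_details"]  # illustrative

def main(mytimer: func.TimerRequest) -> None:
    conn = pyodbc.connect(os.environ["SQL_CONNECTION_STRING"])
    cursor = conn.cursor()
    for table in CLIENT_TABLES:
        # Delete records older than 90 days from each client table.
        cursor.execute(
            f"DELETE FROM {table} WHERE created_at < DATEADD(day, -90, GETDATE())"
        )
    conn.commit()
    conn.close()
```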

Bug Fixes:

  • Modified batch-processing logic: the batch processing we were using before did not return the expected datetime, causing a discrepancy in the total transaction count; this has been resolved.
  • Allow null values for the provider URL: some clients don’t have a provider URL, so the field now accepts null values when a provider URL isn’t provided by the client.

SLAP V3:

Tech Stack

  1. Python (3.10)
  2. Flask (for SLAP)
  3. FastAPI (for Member Match)
  4. PostgreSQL

Components

  1. SLAP (Smart Launch Auth Proxy)
  2. Member Match

SLAP (Smart Launch Auth Proxy)

SLAP is a proxy for authenticating against multiple IDPs through a common interface. It supports two types of auth flows (see the sketch after this list):

  1. OAuth: Each app registered in SLAP stores information about its IDP of choice. Using this information, SLAP requests a token through the OAuth2 flow; on successful login, the token is returned.
  2. SAML: Each app registered in SLAP stores information about its IDP of choice. Using this information, SLAP requests a token through the SAML flow; on successful login, a SAML response is returned.
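
A minimal sketch of the per-app dispatch in Flask (the framework the notes list for SLAP); the app registry, route, and IDP details are hypothetical:

```python
# Minimal sketch of SLAP's per-app flow dispatch; the registry contents,
# route, and IDP endpoints below are hypothetical.
from urllib.parse import urlencode
from flask import Flask, abort, redirect

app = Flask(__name__)

# Hypothetical registry: each registered app records its IDP of choice.
APP_REGISTRY = {
    "demo-app": {
        "flow": "oauth",  # or "saml"
        "authorize_url": "https://idp.example.com/oauth2/authorize",
        "client_id": "demo-client-id",
        "redirect_uri": "https://slap.example.com/callback",
    },
}

@app.route("/launch/<app_id>")
def launch(app_id):
    cfg = APP_REGISTRY.get(app_id)
    if cfg is None:
        abort(404)
    if cfg["flow"] == "oauth":
        # OAuth2 authorization-code redirect; the token is returned to the
        # app once the callback completes the flow.
        query = urlencode({
            "response_type": "code",
            "client_id": cfg["client_id"],
            "redirect_uri": cfg["redirect_uri"],
        })
        return redirect(f"{cfg['authorize_url']}?{query}")
    # The SAML branch would build an AuthnRequest and redirect to the IDP's
    # SSO URL; a SAML response comes back on success.
    abort(501)
```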

Member Match

The member match operation allows one health plan to retrieve a unique identifier for a member from another health plan using a member’s demographic and coverage information. This identifier can then be used to perform subsequent queries and operations.

It consists of two API endpoints (a usage sketch follows the scenarios below):

  • /api/member-match (Member match using patient demographic details)
  • /api/slap-v3/member-match (Member match using patient’s member Id in the current system)
  1. Member Match using patient demographic details.

    1. Request body (Member Match request example)

      The request body consists of 4 parameters:

      i. MemberPatient: US Core Patient containing member demographics.

      ii. CoverageToMatch: details of prior health plan coverage provided by the member, typically from their health plan coverage card.

      iii. CoverageToLink: details of new or prospective health plan coverage, provided by the health plan based upon the member’s enrollment.

      iv. Consent: consent held by the system seeking the match that grants permission to access the patient information on the system from which a patient is sought.

    2. Response Body

      The response of the API depends on the following scenarios:

      i. If the received consent is invalid: 422 status code (Unprocessable Entity)

      ii. If the auth token is invalid: 422 status code (Unprocessable Entity)

      iii. If the server is unable to find the patient: 422 status code (Unprocessable Entity)

      iv. If the server is able to find the patient (Member Match Response Example): 200 status code (OK)

  2. Member Match using patient member id.

    1. Request body

      The request body consists of 1 parameter:

      i. Member_id: Id to uniquely identify the patient.

    2. Response Body

      The response of the API depends on the following scenarios:

      i. If the auth token is invalid: 422 status code (Unprocessable Entity)

      ii. If the server is unable to find the patient: 422 status code (Unprocessable Entity)

      iii. If the server is able to find the patient: 200 status code (OK) and returns the patient’s FHIR id.
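
A hedged usage sketch of both endpoints; the host, payload shape, and header names are assumptions for illustration, not the documented wire format:

```python
# Hedged sketch of calling both member-match endpoints; host, payload shape,
# and headers are illustrative, while the routes, parameters, and status
# codes come from the notes above.
import requests

BASE = "https://example-host"  # hypothetical
HEADERS = {"Authorization": "Bearer <token>"}

# 1. Match on demographics: the four documented parameters.
demographic_body = {
    "MemberPatient": {"resourceType": "Patient",
                      "name": [{"family": "Person", "given": ["Patricia"]}],
                      "birthDate": "1974-12-25"},
    "CoverageToMatch": {"resourceType": "Coverage", "subscriberId": "55678"},
    "CoverageToLink": {"resourceType": "Coverage", "subscriberId": "A12345"},
    "Consent": {"resourceType": "Consent", "status": "active"},
}
r1 = requests.post(f"{BASE}/api/member-match", json=demographic_body,
                   headers=HEADERS, timeout=30)

# 2. Match on member id: a single documented parameter.
r2 = requests.post(f"{BASE}/api/slap-v3/member-match",
                   json={"Member_id": "M000123"}, headers=HEADERS, timeout=30)

for r in (r1, r2):
    if r.status_code == 200:        # patient found
        print(r.json())             # second endpoint returns the FHIR id
    elif r.status_code == 422:      # invalid consent/token or no match
        print("unprocessable:", r.text)
```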

Health Data Clarity (HDC):

Addressing Previous Issues:

The primary concern that users faced before this update was the extended response time of the Clarity APIs. This issue resulted in delays when users attempted to access data, ultimately impacting the overall usability of the app. With the utilization of WebSocket technology and other optimizations, we have successfully overcome this challenge, ensuring that API responses are now rapid and efficient.

Implementations for this release:

We are excited to introduce a significant enhancement to the Clarity API, aimed at elevating user experience by optimizing data loading within the Clarity application. In this update, we have implemented WebSocket APIs, revolutionizing the way data is transmitted and received.

Key Feature:

  1. WebSocket Data Transmission: We have taken a proactive approach to address data loading challenges within the Clarity application. With this release, we introduce WebSocket APIs, designed to transform the way data is delivered to the app. By utilizing WebSocket technology, we enable the transmission of data in efficient and manageable chunks (a minimal sketch follows the advantages below).

    Advantages:

    • Enhanced Loading Times: The WebSocket APIs drastically reduce the time it takes to load data in the application. By breaking down the information into smaller, manageable portions, we ensure that users can access the desired content swiftly and effortlessly.
    • Improved User Experience: Faster data loading translates directly into an improved user experience. Users will appreciate the seamless and responsive nature of the Clarity app as they interact with data that is readily available.
    • Optimized Performance: The implementation of WebSocket APIs contributes to the overall performance optimization of the application. By streamlining data delivery, we reduce the strain on resources, ensuring a smoother experience even during peak usage.
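
A minimal sketch of chunked delivery over a WebSocket, assuming the Python `websockets` package; the message shape, chunk size, and data source are illustrative:

```python
# Hedged sketch of chunked delivery over a WebSocket; message shape,
# chunk size, and the data source are illustrative.
import asyncio
import json
import websockets

CHUNK_SIZE = 50  # hypothetical number of records per message

def fetch_all_records():
    # Stand-in for the real resource query.
    return [{"id": i} for i in range(200)]

async def send_records(websocket, path=None):
    records = fetch_all_records()
    total = len(records)
    # Send the total first so the client can drive a progress bar.
    await websocket.send(json.dumps({"type": "count", "total": total}))
    for start in range(0, total, CHUNK_SIZE):
        chunk = records[start:start + CHUNK_SIZE]
        await websocket.send(json.dumps({"type": "chunk", "records": chunk}))
    await websocket.send(json.dumps({"type": "done"}))

async def main():
    async with websockets.serve(send_records, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())
```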

We have compiled a series of screenshots displaying the UI and API integration, showcasing the changes made to enhance functionality and user experience.

  1. Implemented a progress bar to provide real-time data loading updates for users/members, while also disabling cards until all data related to a member is fetched.

  2. Added a resource count feature, providing users/members with a summary of the total records associated with their account.

  3. Implemented the progress bar on the individual resource page, displaying the number of fetched records. Disabled the sort button until all records are loaded, so the sorting option is available only after data retrieval completes. Additionally, added lastUpdated as a default date instead of showing “Date Not found”.

Clarity App future enhancements:

  • Overcoming Limitations with Websockets for Larger Records: Currently, we are using Websockets for real-time updates, but we have identified certain limitations when dealing with larger records. By exploring alternative technologies or optimizing the WebSocket implementation, we aim to ensure efficient and reliable real-time data updates even for extensive datasets.
  • Enhancing User Experience for the Timeline View: We plan to improve the user experience for the timeline view by optimizing the loading speed and responsiveness. This includes implementing asynchronous data fetching, lazy loading, and caching techniques to ensure smoother navigation and interaction with the timeline. Additionally, we aim to provide more intuitive and user-friendly controls for filtering, searching, and organizing timeline events.

OnyxOS Provider Directory (Plannet) 4.1:

1. PREPROC CHANGES

  1. Read the JSON file and save the data into a CSV file after preprocessing.
  2. Made changes to the DBFS rename step to handle file names that differ for each client.
  3. Added the cross-walking reference table mappings, which fetch the FHIR ID from the DB if the record is present. Notebooks updated: all preprocessing notebooks.
  4. Used the explode command to split array elements from a single row into multiple rows so they can be accumulated into a CSV file (see the sketch after this list). Notebooks updated: Location, Organization, Practitioner.
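
A minimal PySpark sketch of the explode step; the column names and data are illustrative:

```python
# Minimal PySpark sketch of the explode step; column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("org-1", ["addr-a", "addr-b"])],
    ["organization_id", "addresses"],
)

# explode() turns each element of the array into its own row, which can
# then be written out as flat CSV rows.
flat = df.withColumn("address", explode("addresses")).drop("addresses")
flat.write.mode("overwrite").option("header", True).csv("/tmp/organization_csv")
```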

2. MAIN CHANGES

Template

  1. Made changes to all the profile templates to incorporate the new changes.
  2. Added new column names to the ProfileTemplateConfigStr variable to display those columns in the FHIR JSON. Notebooks updated: all templates.
  3. Changed the column names present in the template’s reference section so the required data is shown in that section. Notebooks updated: all templates.

FHIR Convertor

  1. Added a condition to check whether the file contains data before generating the templates. Notebook updated: FHIR_Convertor_Functions.py.
  2. Read the Client_Type from the ini file and pass it through the audit-logging code.
  3. Imported the required packages to support logging the count to the common audit table.
  4. Added code that logs the count to the CommonAudit table after merging and removing duplicates.

COPYSFTPTODBFS

  1. Read the Client_Type from config.ini to match the archiving path.
  2. Added a condition to differentiate the archival folder structure in SFTP.
  3. Added a condition to pick up the JSON files from SFTP.

Preproc_FP_FCWrapper

  1. Newly added wrapper notebook that triggers both the preproc and FHIR Convertor notebooks.

FhirPrepTransformations

  1. Added the config parameter to the read-config method to fetch the values from the config file.
  2. Modified the FhirPrep transformation query: added a condition to filter on Status and on Version greater than 0.

DATABASE

Using the same SQL server for all the consolidated clients, with a new individual client-specific database set up as below: metadata_v1_ for each client.

FHIR PAAS

Using the same FHIR PaaS for ingestion purposes for all consolidated clients; for one ADB instance, we are using one common FHIR PaaS.

JOB WORKFLOW

  1. The job workflow code has been updated to convert all parallel tasks to serial tasks to support the cross-walking references.
  2. Changed the NotebookPath key to the respective profile value from the ini file to call the new wrapper notebook.
  3. Reordered the profiles based on their dependencies. The workflow order is: Organization, Location, Practitioner, Network, HealthcareService, InsurancePlan, OrganizationAffiliation, PractitionerRole.

CONFIG

  1. Introduced parameterization of the key-value pairs in the config.ini file to handle the data file location, which differs between internal and client environments.
    1. data_file_path__pvd
    2. db_name_
    3. pvd_main_notebook_path
    4. pvd_preproc_notebook_path_
  2. Key-value pairs that were updated: updated the paths to reflect the new folder changes (workspace and DBFS) accordingly.

  3. FHIR PaaS consolidation. [FHIR-PAAS-CRED]
    Using the same FHIR PaaS credentials for all the consolidated clients.

DeleteScriptRecords

  1. Added a new file that fetches the FHIR records from the FHIR store, groups those resources into bundles of 400, and passes each bundle to the delete method (see the sketch after this list).
  2. After that, the DB is truncated as well.
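
A hedged sketch of the bundling step: group fetched resources into batch bundles of 400 DELETE entries and hand each to a delete method. The helper names are hypothetical:

```python
# Hedged sketch of the delete flow; fetch_resources() and post_bundle()
# stand in for the real FHIR store calls.
BUNDLE_SIZE = 400

def to_delete_bundle(resources):
    """Wrap resources in a FHIR batch bundle of DELETE entries."""
    return {
        "resourceType": "Bundle",
        "type": "batch",
        "entry": [
            {"request": {"method": "DELETE",
                         "url": f"{r['resourceType']}/{r['id']}"}}
            for r in resources
        ],
    }

def delete_all(fetch_resources, post_bundle):
    resources = fetch_resources()
    # Pass the resources to the delete method in bundles of 400.
    for start in range(0, len(resources), BUNDLE_SIZE):
        post_bundle(to_delete_bundle(resources[start:start + BUNDLE_SIZE]))
```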

Fixes for bugs logged by the testing team during internal testing

  1. 17183-[PVD] - InsurancePlan: Any cross-references in FHIR JSON should be mapped based on reference tables. If a FHIR reference is not found, a relevant message to be shown.

    We have added the cross-walk references in preproc and made the required template changes.

  2. 17184 - [PVD] - Organization: The value of “Organization.type.coding.display” is not mapped in the FHIR JSON. However, the code ‘pvrgrp’ is present in the value system in FHIR, which means the corresponding display value should also be mapped.

    Display is not a mandatory field, and we haven’t implemented it for any PVD Clients.

  3. 17185 - [PVD] - Network: The value of “Organization.type.coding.display” is not mapped in the FHIR JSON. However, the code ‘ntwk’ is present in the value system in FHIR, which means the corresponding display value should also be mapped. Showing display values for elements of type “coding” will be taken up as an enhancement in an upcoming release, tracked in ADO ticket Bug 17197: [For all clients] PVD - All Profiles: coding.display values need to be populated based on the code values. Though display values are optional, it is good to have them.

  4. 17186 - [PVD] - HealthcareService: The value of “HealthcareService.category.coding.display” is not mapped in the FHIR JSON. However, the code ‘prov’ is present in the value system in FHIR, which means the corresponding display value should also be mapped.

    Showing display values for elements of type “coding” will be taken up as an enhancement in an upcoming release, tracked in ADO ticket Bug 17197: [For all clients] PVD - All Profiles: coding.display values need to be populated based on the code values. Though display values are optional, it is good to have them.

  5. 17188 - [PVD] - PractitionerRole: The columns “NewPatients_AcceptingPatients_Id” and “NewPatients_FromNetwork_Id” are not mapped in the FHIR JSONs.

    Made the required change in the template variables to map to the correct columns.

  6. 17275 - [PVD] - OrganizationAffiliation: In the audit log table for OrganizationAffiliation, only the before-preprocess step is displayed; there is no record of the after-preprocess step.

    Made a change in the preprocessing file, which was pointing at the wrong DataFrame.

  7. 17438 - PVD-Network: DeleteScript Logic has no effect on Network Profile

    Added a condition in the delete script: if the profile type is Network, the profile name is set to Organization and the profile subtype to Network so that the network records are deleted from the FHIR PaaS.

OnyxOS US Clinical core (clinical) 4.3:

  1. Conditional-InitScript:

    Refining the initial folder structure setup, we’ve streamlined the process by eliminating the Additional-InitScript and Initscript notebooks. Instead, we’ve incorporated a more efficient approach using the Conditional-InitScript notebook, which reads the essential folder paths directly from the configuration ini file, improving the overall system’s organization and usability. We’ve also introduced an expanded resource list encompassing new elements, and incorporated the client-specific folder names that are essential for seamless functionality.

    • Notebook Removed:

      i. Additional-InitScript

      ii. Initscript

    • Notebook Updated:

      i. Conditional-InitScript

  2. Audit and ErrorLog Reports:

    We’ve introduced a significant improvement by isolating the Generate_AuditErrorLog_Reports notebook, enabling the autonomous generation of reports. Additionally, we’ve implemented a novel approach to facilitate the uploading of report files to client-specific destinations. This enhancement involves the integration of new notebooks.

    • Notebook Updated:

      i. Generate_AuditErrorLog_Reports

      ii. ReportGeneration.

    • Notebooks Added:

      i. Move_AuditErrorLog_Reports

  3. Templates Updates: We’ve added a fresh template designed specifically for the CarePlan resource, enriching our system with enhanced capabilities. We’ve also augmented the existing templates with new properties, elevating their functionality and ensuring adaptability to evolving requirements. Moreover, we’ve addressed the challenge of managing multiple values within array-type properties, and we’ve rectified minor bugs within the templates.

    • Templates Added:

      i. CarePlan

    • Templates Updated:

      i. Condition

      ii. Immunization

      iii. LaboratoryResult

      iv. VitalSigns

      v. SmokingStatus

  4. Enhancing JSON File Management: Our latest codebase enhancement introduces robust JSON file support alongside the existing CSV file compatibility. The code now discerns the source file format from information stored in the config ini file and orchestrates the appropriate operations and transformations accordingly (a sketch of this dispatch follows this section).

    Notebooks Updated:

    1. FhirPrep:

      Introduced a new notebook for CarePlan resource integration and integrated code updates for seamless JSON file handling, including reading and writing capabilities.

      • Notebook Added:

        I. CarePlan

      • Notebook Updated:

        I. CareTeam

        II. Condition

        III. Goal

        IV. Immunization

        V. SmokingStatus

        VI. LaboratoryResult

        VII. VitalSigns

        VIII. DiagnosticReport

        IX. Encounter

        X. MedicationRequest

        XI. Procedure

    2. FHIR_Template_Converter:

      1. The codebase has undergone a significant update, empowering it to seamlessly process JSON files as well. This dynamic capability is determined by the source file’s format, read from the config ini file.

      2. Implemented an additional audit log feature that takes effect after merging duplicate records for specific resources. This merging process, embedded within the FHIR converter, ensures alignment between the before-preprocess count and the count during the FHIR conversion merge.

    3. SubmitMainJob: Updated the code snippet to seamlessly process CSV or JSON files based on the file format extracted from the config ini file.

    4. ArchiveFilesRemoveLogFiles: Revised the codebase to facilitate automatic archiving of landing files, contingent upon the source file format gleaned from the configuration ini file.

  5. Workflow: We’ve seamlessly integrated new tasks into our workflow to provide comprehensive support for the CarePlan resource and introduced a single new task within the workflow, dedicated to efficiently copying the source files to the Databricks File System (DBFS).

  • Tasks Added:

    • CopySourceFiles

    • PREPROC_CAREPLAN

    • CAREPLAN

  6. PreProc Updates:
    1. Copy Activity: In our effort to enhance operational efficiency, we’ve introduced a new notebook within the PreprocessRun. This consolidated notebook serves as a centralized trigger, initiating all copy-activity pipelines in a unified manner and eliminating the need for separate triggers.
    • Notebook Added:

      • CopySourceFiles
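
A minimal sketch of the format-driven dispatch described in item 4 above, assuming configparser and a hypothetical [SOURCE] file_format key in the config ini file:

```python
# Hedged sketch of format-driven reads; the config path, section, key,
# and landing path are illustrative assumptions.
import configparser
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

config = configparser.ConfigParser()
config.read("/dbfs/config/config.ini")              # hypothetical path
file_format = config.get("SOURCE", "file_format")   # "csv" or "json"

def read_source(path):
    # Branch on the source format recorded in the config file.
    if file_format == "json":
        return spark.read.option("multiline", True).json(path)
    return spark.read.option("header", True).csv(path)

df = read_source("/dbfs/landing/condition")  # hypothetical landing path
```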

OnyxOS CARIN BB (Claims) V 4.2.1:

  1. Preproc fix

    1. Moved to prod for v4.2.1.

    2. Configured a new set of keys (public, private) and a passphrase, which are used for encryption/decryption.

    3. Fix for the copy activity regarding recursive copy.

    4. Fix for the EOB insurance reference: using the claim’s unique identifier instead of the coverage id.

  2. Config.ini fix

    1. Fix for upload retry: put the right path in config.ini so the retry logic picks up the bundles that failed with 500 errors.
  3. Main: fix

    1. Removed the Additional-InitScript notebook and consolidated its logic into the Conditional-InitScript notebook to handle initial folder creation in both the internal and production environments.
  4. Internal env cleanup activity: added a new notebook with the logic for internal env cleanup (cleaning up the archive files from the DBFS env) and a new scheduled job workflow that triggers on the 1st and 15th of every month. (This is mainly for the internal env.)

OnyxOS CARIN BB (Claims), Provider Directory (Plannet) – AWS V 5.0 (Preprocessing):

Features include:

  • Data mapping using the AWS-provided data dictionary.
  • Audit and error logging implementations on the AWS platform.
  • Added a condition to read Parquet files as source files.
  • Changed pre-processed files to Parquet format.
  • Removed toPandas.
  • Removed YARN.
  • UDF for the date columns, to read any kind of date format and ignore nulls, if any (see the sketch after this list).
  • Changes with respect to the addition of IG in control logging, read_config, and other functions due to consolidation.
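
A hedged sketch of such a date UDF (the notes later mention a parse_date_udf in commonfunctions.py); the format list and usage are illustrative, not the exact production set:

```python
# Hedged sketch of a date-parsing UDF; the format list and column names
# are illustrative assumptions.
from datetime import datetime
from pyspark.sql.functions import udf
from pyspark.sql.types import DateType

DATE_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%Y%m%d"]  # assumed formats

@udf(returnType=DateType())
def parse_date_udf(value):
    # Ignore nulls/empties and try each known format in turn.
    if not value:
        return None
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value.strip(), fmt).date()
        except ValueError:
            continue
    return None  # unparseable values fall back to null

# Usage: df = df.withColumn("service_date", parse_date_udf("service_date"))
```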

Test Proof:

  1. Parquet format:
  2. Data mapping: Please find the attached Data Guide mapped using Data Dictionary https://newwavetechio-my.sharepoint.com//g/personal/shouray_kumra_onyxhealth_io/EVTVqCXnibFKhhRh9GJYZJcB_Vel9sK1K0AtifwEc_JO-A?e=z9pe7s
  3. Control logging:

OnyxOS CARIN BB (Claims), Provider Directory (Plannet) – AWS V 5.0 (Main):

Features include:

  • Mount S3 to read/write files in the conditional init script.
  • Archival of source, pre-processing, and .done files to the S3 bucket.
  • Retrieval of secrets using AWS Key Management Service and Secrets Manager (see the sketch after this list).
  • Database connection related changes.
  • Parquet file reads for the main notebooks (transformation queries).
  • Firely-related changes (the authorization part is not yet complete, as we don’t have a client id and secret for testing the Firely server).
  • MongoDB has been chosen as the persistence layer (database) with the Vonk server.
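
A minimal sketch of secrets retrieval with boto3 Secrets Manager; the secret name and region are placeholders:

```python
# Minimal sketch of secrets retrieval via AWS Secrets Manager; the secret
# name and region are placeholders.
import json
import boto3

def get_secret(secret_name, region_name="us-east-1"):
    client = boto3.client("secretsmanager", region_name=region_name)
    response = client.get_secret_value(SecretId=secret_name)
    # Secrets are commonly stored as JSON key/value pairs.
    return json.loads(response["SecretString"])

creds = get_secret("safhir/db-credentials")  # hypothetical secret name
```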

Test Proof:

  1. Mount S3:
  2. Retrieval of secrets:
  3. Database connection:
  4. Read parquet in main notebooks:
  5. MongoDB:

Consolidation (General Repo) Deployment V 1.0

Features include:

  1. commonfunctions.py

    a. read_configkey(section, key, ig) —> Added an additional parameter ‘ig’, which specifies the Implementation Guide. We have common functions across all the IGs, so this parameter eliminates code redundancy (see the sketch after this list).

    b. getSAFHIRInfoErrorDescription(section, key, ig) —> Added an additional parameter ig.

    c. saveControlLogToDb() —> added ig

    d. sendEmailNotifications(), sendTeamsChannelNotification() —> added parameter ig.

    e. sendTeamsChannelNotification() —> url = read_configkey(‘TEAMS-CHANNEL-URLS’, ‘CLAIMS_URL’, ig); replaced CLAIMS_URL with URL.

    f. Added writeParquetfile(ig, filename)

    g. Added parse_date_udf

    h. get_secret(secret_name,ig)

  2. SAFHIR_Log_Actions.py

    a. getCurrentLogger(self,ig,name,filename) —> Added an additional parameter ig.

  3. preprocessingFunctions.py

    a. getTableAsDF() —> Added an additional parameter ig.

    b. Removed performDataRowHashing() and hashedBulkDBUpsert()

  4. preprocAuditErrorLogging.py

    a. generateErrorLogs() —> imported length.

    b. saveAuditLogsToDBTable(), saveErrorLogsToDBTable(), DeleteDBTableAuditandErrorLogs() —> added parameter ig.

  5. DBConnection.py

    a. connectSQLDbUsingPyOdbc() —> added parameter ig.

  6. CopySourceFiles.py

    a. Added renaming of source files —> read_configkey, currentLogger changes.

  7. SubmitPreprocJob.py

    a. Added the extra parameter ig in sendTeamsChannelNotification(), saveControlLogToDb(), read_configkey, currentLogger.

  8. SourceFileCheck.py

    a. read_configkey, currentLogger
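
A hedged sketch of what an ig-aware read_configkey could look like; the config path and section-naming convention are assumptions, not the actual implementation:

```python
# Hedged sketch of the ig-aware config lookup; the config path and the
# IG-prefixed section convention are illustrative assumptions.
import configparser

CONFIG_PATH = "/dbfs/config/config.ini"  # hypothetical

def read_configkey(section, key, ig):
    config = configparser.ConfigParser()
    config.read(CONFIG_PATH)
    # Assumed convention: IG-specific sections such as "CLAIMS-TEAMS-CHANNEL-URLS".
    ig_section = f"{ig.upper()}-{section}"
    if config.has_section(ig_section):
        return config.get(ig_section, key)
    return config.get(section, key)  # fall back to the shared section

# Usage mirroring the notes: url = read_configkey("TEAMS-CHANNEL-URLS", "URL", "claims")
```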

Config ini changes

  • Added access keys for retrieving secrets from AWS Key management service and Secrets Manager.
  • Changed the paths from DBFS location to S3 bucket locations.
  • Removed Azure key vault related parameters.
  • Removed ADLS copy related parameters.
    1. Access keys:
    2. Changed paths:

Changes and Updates to the Code Base with the Introduction of Delta Tables

GENERAL REPO:

  1. PreprocAuditErrorLogging.py -

    1. saveErrorLogsToDBTable()

      • Removed the pyodbc connection string

      • Added writeTo to insert (append) the error DataFrame into the Error Log table

    2. saveAuditLogsToDBTable()

      • Removed the pyodbc connection string

      • Added writeTo to insert (append) the audit DataFrame into the AuditLog table

    3. DeleteDBTableAuditandErrorLogs()

      • Replaced the STORED PROC with a Python method

      • Deletes logs older than the given duration using a Spark SQL query

  2. commonfunctions.py
    1. saveControlLogToDb()
      • Replaced the MS SQL query with a Spark SQL query
      • Added a condition for preprocess vs. main ingestion
  3. DBConnection.py
    1. connectSQLDbUsingPyOdbc()
      • Removed, as the pyodbc connection is obsolete
  4. preprocessingFunctions.py
    1. GetTableDataAsDF()
      • Removed, as the pyodbc connection is obsolete
  5. Upsert_Functions.py
    1. load_data_into_sql_table()
      • Removed the creation of the temp table
      • PySpark Delta table upsert: MERGE on the source reference table with a DataFrame created from the .done file (see the sketch below)
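
A hedged sketch of the Delta-table patterns described above (writeTo append, MERGE upsert, and SQL-based retention delete); table and column names are illustrative:

```python
# Hedged sketch of the Delta-table patterns; table and column names are
# illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Append-style logging (replacing the pyodbc inserts), e.g.:
# error_df.writeTo("metadata.error_log").append()

# Upsert (MERGE) of a .done-file DataFrame into a source reference table:
updates_df = spark.read.option("header", True).csv("/dbfs/landing/file.done")
ref = DeltaTable.forName(spark, "metadata.source_ref")
(ref.alias("t")
    .merge(updates_df.alias("s"), "t.record_id = s.record_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Retention-style cleanup (replacing the stored proc):
spark.sql("DELETE FROM metadata.audit_log "
          "WHERE log_date < date_sub(current_date(), 90)")
```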

PREPROCESS REPO

  1. All Preprocess Notebooks
    • Removed imports from obsolete functions
  2. Patient
    • Corrected the patient name column header
  3. Claims
    • Corrected the claim prescribing physician NPI column header
  4. ClaimsDiagnosis
    • Added line-number increment logic
  5. Reference.py
    • Added the consolidated library reference in Reference.py
  6. SubmitPreprocJob.py
    • Replaced the default date formats with ISO date formats as the parameter to the control log
    • Replaced None with ‘’ (empty string)
    • Added a default value for record counts in the INT datatype

MAIN REPO

  1. Additional-InitScript.py
    • Removed imports from obsolete functions
  2. Conditional-InitScript-refLoad.py
    • Removed imports from obsolete functions
    • Removed creation of temp reference tables (the Delta tables are now the actual reference tables)
  3. All Main Notebooks
    • Replaced old reference table names with the specific Delta reference tables
  4. DbWarmUp.py
    • Removed, as the pyodbc connection is obsolete
  5. fhirUtils.GetRefTables.py
    • Removed, as the pyodbc connection is obsolete
  6. IDMLoad.py
    • Removed the pyodbc connection, as it is obsolete
    • Removed imports from obsolete functions
    • Removed the user-defined SparkConfig spark session
    • Added a Spark SQL query for the IDM load instead of the previous STORED PROC approach
  7. UploadIndividualRecords.py
    • Removed the pyodbc connection, as it is obsolete
    • Removed imports from obsolete functions
    • Added writeTo to insert (append) the error DataFrame into the Error Log table
  8. SubmitMainJob.py
    • Replaced the default date formats with ISO date formats as the parameter to the control log
    • Replaced None with ‘’ (empty string)
    • Defaulted the value for record counts to the INT datatype
  9. JobSummaryDetails.py
    • Gets the total, success, failed, and logged resource counts from the job summary, audit, and error logs, and joins them to report on the dashboard for a given RunDate
