Digital Transformation - Industrialized Data Delivery
A key feature is the ability to automatically transform data from one vendor's RDBMS format into another's.
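A minimal sketch of what such cross-vendor type translation might look like, assuming an Oracle source and a PostgreSQL target; the mapping table and function names here are illustrative, not the product's actual rules:

```python
# Illustrative source-to-target type mapping (Oracle -> PostgreSQL).
ORACLE_TO_POSTGRES = {
    "NUMBER": "NUMERIC",
    "VARCHAR2": "VARCHAR",
    "DATE": "TIMESTAMP",
    "CLOB": "TEXT",
}

def translate_column(col_type: str) -> str:
    """Map a source column type to its target-vendor equivalent."""
    base = col_type.split("(")[0].upper()
    target = ORACLE_TO_POSTGRES.get(base, base)  # pass unknown types through
    suffix = col_type[len(base):]                # keep length/precision, e.g. (50)
    return target + suffix

print(translate_column("VARCHAR2(50)"))  # -> VARCHAR(50)
```

A real implementation would also need to handle precision differences and vendor-specific defaults, but the core idea is a metadata-driven lookup per column.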
Key Capabilities
- A sophisticated, metadata-registry-driven architecture
- Allows classification of registry elements
- Moves data across the enterprise securely
- Replicates data across the enterprise
Common Uses
- Supports data projects for operational and analytical purposes
- Data Virtualization
Cloud connectors include Google Cloud, Amazon AWS, and Microsoft Azure.
The Metadata Registry is automatically populated when a source application is onboarded. Classification is performed by the business user in Excel or directly on screen. The Data Catalog is presented to any user requesting the available data sets. Upon request, a notification is sent to the owner of the data. On approval, the system starts pushing notifications for first-time initialization and for ongoing changes thereafter.
Classification is performed manually via the screen or a loader, using one of the following levels:
- Private
- Public
- Secret
- Confidential
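The registry and approval workflow above can be sketched as follows; the class and method names (`Catalog`, `request_access`, `approve`) are assumptions for illustration, not the product's actual API:

```python
from dataclasses import dataclass, field
from enum import Enum

class Classification(Enum):
    PRIVATE = "Private"
    PUBLIC = "Public"
    SECRET = "Secret"
    CONFIDENTIAL = "Confidential"

@dataclass
class DataSet:
    name: str
    owner: str
    classification: Classification

@dataclass
class Catalog:
    datasets: dict = field(default_factory=dict)
    notifications: list = field(default_factory=list)  # (recipient, message)

    def register(self, ds: DataSet):
        # populated automatically when a source application is onboarded
        self.datasets[ds.name] = ds

    def request_access(self, user: str, name: str):
        # a request notifies the data owner
        owner = self.datasets[name].owner
        self.notifications.append((owner, f"{user} requested {name}"))

    def approve(self, user: str, name: str):
        # on approval, push a first-time initialization notification
        self.notifications.append((user, f"initialize {name}"))

catalog = Catalog()
catalog.register(DataSet("trades", "alice", Classification.CONFIDENTIAL))
catalog.request_access("bob", "trades")
catalog.approve("bob", "trades")
```

The key design point is that notifications flow in both directions: requests go to the data owner, and approvals trigger delivery notifications to the subscriber.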
When source applications are onboarded, the ingestion framework performs Change Data Capture to pick up the latest changes to the data sets. The ingested data is stored in Hadoop, and a notification is sent to the subscribers.
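A minimal sketch of that Change Data Capture loop, assuming a watermark column named `updated_at` and using a dict as a stand-in for HDFS; all names here are illustrative:

```python
import json

def ingest_changes(source_rows, last_watermark, storage, subscribers, notify):
    """Pull rows changed since last_watermark, land them in storage,
    and notify subscribers; returns the new watermark."""
    changes = [r for r in source_rows if r["updated_at"] > last_watermark]
    if changes:
        path = f"/ingest/batch_after_{last_watermark}.json"
        storage[path] = json.dumps(changes)  # stand-in for an HDFS write
        for s in subscribers:
            notify(s, path)                  # tell each subscriber where the data landed
    return max((r["updated_at"] for r in changes), default=last_watermark)

hdfs = {}
sent = []
rows = [{"id": 1, "updated_at": 5}, {"id": 2, "updated_at": 12}]
new_wm = ingest_changes(rows, 10, hdfs, ["loader"], lambda s, p: sent.append((s, p)))
```

Persisting the returned watermark between runs is what makes the capture incremental: each run only picks up rows newer than the last one it saw.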
Upon receiving a notification, the Data Loader performs the following:
- Streams the data from Hadoop storage
- Translates data types from the source to the target vendor database
- Writes to a centralized log
- Uses speed-write techniques to stream the data in a parallel, threaded environment
- Ensures multiple notifications are processed at the same time
- Provides prioritization of threads
- Keeps track of processed files to prevent redundant processing
- Logs information for monitoring
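The steps above can be sketched with a priority queue, worker threads, and a processed-file set; this is an assumed shape, not the product's actual implementation:

```python
import queue
import threading

processed = set()   # tracks files to prevent redundant processing
log = []            # stand-in for the centralized log
lock = threading.Lock()
work = queue.PriorityQueue()  # lower number = higher priority

def worker():
    while True:
        priority, path = work.get()
        if not path:                 # empty path = shutdown sentinel
            work.task_done()
            return
        with lock:
            if path in processed:    # skip files already loaded
                work.task_done()
                continue
            processed.add(path)
        # stream the file and translate types here (omitted)
        with lock:
            log.append(f"loaded {path} (priority {priority})")
        work.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for i, path in enumerate(["/data/a", "/data/b", "/data/a"]):
    work.put((i % 2, path))          # duplicate "/data/a" will be skipped
work.join()
for t in threads:
    work.put((99, ""))               # one sentinel per worker
for t in threads:
    t.join()
```

The set-plus-lock check is what makes concurrent processing safe: each file is claimed exactly once, no matter how many notifications arrive for it.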
Reporting capabilities
- Monitors the processing of files across the enterprise
- Identifies failures and provides a restart mechanism
- Provides a dashboard on the health of subscription projects
- Provides users with usage metrics
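A small sketch of how such reporting might be derived from the centralized log; the pipe-delimited log format and field names are assumptions for illustration:

```python
from collections import Counter

def health_report(log_lines):
    """Summarize per-project outcomes and collect failed files
    that are eligible for restart."""
    status = Counter()
    failed = []
    for line in log_lines:
        project, path, outcome = line.split("|")
        status[(project, outcome)] += 1
        if outcome == "FAILED":
            failed.append(path)  # candidates for the restart mechanism
    return status, failed

report, to_restart = health_report([
    "sales|/data/a|OK",
    "sales|/data/b|FAILED",
    "hr|/data/c|OK",
])
```

Because both the dashboard and the restart list are computed from the same log, a single write path feeds monitoring, failure recovery, and usage metrics.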