GeneralLedger.Details
Purpose: Details
Use Case Owner: Soon Tan
Typical Use Case: General Ledger
Style: Details
Source Key Parameters: Company, Ledger_ID (Created), Account_ID, PRUPROJECT_ID
Sample Exec Response: SEQ, Dates, Codes
SQL
Purpose: Provides Management Account Structure
Use Case Owner: Soon Tan
Typical Use Case: Financial Account Structure
Style: Detail
Source Key Parameters: Company, Account Code
Sample Exec Response: Level 1 Heading, Grouping 1, Grouping 2, Leaf Account
SQL
One of the first lines of defense in determining the causes of database slowdowns is sp_who2. sp_who2 shows all the sessions that are currently established in the database; these are denoted as SPIDs, or Server Process IDs. Running sp_who2 is easy: all that is required is to type sp_who2 and execute it; however, it's …
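sp_who2 can also be run programmatically, which is handy for scripted monitoring. The following Python sketch is one way to do that, assuming SQL Server and the pyodbc driver; the connection string is a placeholder, and the blocked-session filter relies on sp_who2 reporting the blocking SPID in its BlkBy column:

```python
import pyodbc

# A minimal sketch, assuming SQL Server and the pyodbc driver;
# the server name and connection options are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=master;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("EXEC sp_who2")  # one row per established session (SPID)

for row in cursor.fetchall():
    spid, status, login, host, blk_by = row[0], row[1], row[2], row[3], row[4]
    # BlkBy holds the blocking SPID, or '  .' when the session is not blocked
    if str(blk_by).strip() not in ("", "."):
        print(f"SPID {spid} ({login}@{host}) is blocked by SPID {blk_by}")

conn.close()
```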
Aim to completely transform the data infrastructure of the organisation, making it agile, scalable, and highly efficient. This could involve implementing cutting-edge technologies like distributed computing, real-time processing, and machine learning pipelines.
Set a goal to achieve zero downtime for data systems. This would involve building robust redundancy, fail-over mechanisms, and automated recovery processes to ensure continuous data availability.
Strive to make data accessible to everyone in the organisation, regardless of technical expertise. Implement self-service analytics tools, intuitive dashboards, and comprehensive data documentation to empower all stakeholders to make data-driven decisions.
Work towards implementing AI and machine learning algorithms to optimise data operations. This could involve automating data quality checks, anomaly detection, and predictive maintenance for data infrastructure.
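As a small illustration of what an automated quality check can look like, the sketch below applies two scripted rules with pandas; the file name (daily_feed.csv) and the id/amount columns are hypothetical:

```python
import pandas as pd

# A minimal sketch of an automated data quality check; the feed file
# and column names are hypothetical.
df = pd.read_csv("daily_feed.csv")

# Rule 1: key columns must contain no nulls
null_counts = df[["id", "amount"]].isnull().sum()
assert null_counts.sum() == 0, f"Null values found:\n{null_counts}"

# Rule 2: flag statistical anomalies (amounts more than 3 standard
# deviations from the mean) for review
z_scores = (df["amount"] - df["amount"].mean()) / df["amount"].std()
anomalies = df[z_scores.abs() > 3]
if not anomalies.empty:
    print(f"{len(anomalies)} anomalous rows detected:")
    print(anomalies)
```

A check like this can run on a schedule and fail the pipeline before bad data propagates downstream.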
Set a goal to build a data processing pipeline that can effortlessly handle massive volumes of data without any performance degradation. This could involve leveraging technologies like Apache Spark, Apache Flink, or Google Dataflow to achieve seamless scalability.
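As a sketch of what such a pipeline might look like in PySpark (the input/output paths and column names are illustrative only), the same few lines run unchanged on a laptop or a large cluster, since Spark distributes the work across whatever executors are available:

```python
from pyspark.sql import SparkSession, functions as F

# A minimal PySpark sketch; the S3 paths and columns are illustrative.
spark = SparkSession.builder.appName("daily-aggregation").getOrCreate()

events = spark.read.parquet("s3://my-bucket/events/")  # hypothetical input

daily_totals = (
    events.groupBy("event_date", "event_type")
          .agg(F.count(F.lit(1)).alias("event_count"),
               F.sum("value").alias("total_value"))
)

daily_totals.write.mode("overwrite").parquet("s3://my-bucket/daily_totals/")
spark.stop()
```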
Aim to enable real-time analytics capabilities for the organisation, allowing stakeholders to make decisions based on the most up-to-date information. This could involve building streaming data pipelines, implementing complex event processing systems, and developing real-time dashboards.
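One way to sketch such a streaming pipeline is with Spark Structured Streaming, as below; the Kafka broker, topic, and the treatment of the message value as an event type are all assumptions, and reading from Kafka additionally requires the spark-sql-kafka connector package:

```python
from pyspark.sql import SparkSession, functions as F

# A minimal Structured Streaming sketch; broker, topic, and schema are
# assumptions, and the Kafka source requires the spark-sql-kafka package.
spark = SparkSession.builder.appName("realtime-event-counts").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "events")
         .load()
)

# Count events per type over one-minute windows, e.g. to feed a live dashboard
counts = (
    stream.withColumn("event_type", F.col("value").cast("string"))
          .groupBy(F.window("timestamp", "1 minute"), "event_type")
          .count()
)

# Write the running counts to the console; a real pipeline would target a
# dashboard store instead.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```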