
Dynamics 365 Fine-Tuning Tips That Skyrocket User Productivity

Imagine opening Dynamics 365 on Monday morning: forms load in under a second, views render instantly, workflows fire without delay. Your sales and service teams zip through their work with no waiting on slow pages or dropped workflows. That’s the kind of leverage you can realize when Dynamics 365 fine-tuning is done right.

In this blog, you will learn how to improve user productivity with Dynamics 365.

We’ll dig into form design, data model design, and Power Platform performance tuning.

You’ll discover how to spot bottlenecks, implement best practices, and monitor performance over time.

By the time we’re done, you’ll have a template for turning Dynamics 365 into a high-velocity business engine, and you’ll know when to contact DAX Software Solution for assistance via the DAX Software home page.

Looking for support with Dynamics 365?

With 20+ years of industry experience in ERP and CRM, DAX crafts tailored solutions to meet the needs of your business.

1. Establish a Performance Baseline

Before you change any settings, you need solid data on how Dynamics 365 performs as-is. A clear baseline helps you:

  • Spot the most severe bottlenecks
  • Quantify the return on each tuning action
  • Focus on high-value optimizations

Here is how you can establish your baseline:

A. Define Key Scenarios and Entities

 1. Pinpoint the critical business flows

  • Sales reps opening an Account record and its associated Opportunity details
  • Customer service reps loading their Case forms and history
  • Back-office staff performing bulk Order updates

 2. Define the metrics to capture

  • Form load time (in seconds)
  • Workflow execution duration (milliseconds)
  • Plugin execution time (milliseconds)
  • Asynchronous job queue length (number of jobs pending)

B. Use Dynamics 365 Performance Center

1. Enable Performance Center

  • Sign in to the Power Platform admin center and go to Environments > [Your Env] > Settings > Product > Performance
  • Enable Client Performance Monitoring

2. Run user scenarios

  • Perform each major step (clicking Account, saving a Case, etc.) from a representative user’s browser at least 10 times, distributed across peak and off-peak hours.
  • The Performance Center logs form load times and client-side script times automatically.

3. Export the results

  • Download the generated CSV. It will have columns such as FormName, LoadTimeMs, ScriptTimeMs, and Timestamp.

C. Application Insights Instrumentation

1. Set the Instrumentation Key

  • Create or select an Application Insights resource in the Azure Portal
  • In D365, navigate to Settings > Administration > System Settings > Customization, and enter the key under Telemetry

2. Capture Server-Side Metrics

  • Perform standard processes (for example, an opportunity scoring plugin or automated email send)
  • Application Insights logs server-side execution time, dependencies, and exceptions.

3. View Metrics

  • Open your Application Insights resource in the Azure portal and query requests and dependencies to see average durations and percentiles.

D. Probe Asynchronous Queue Lengths

  • In the Power Platform Admin Center, navigate to Environments > [Your Env] > Analytics > Jobs
  • Track the Active Jobs and Waiting Jobs in the asynchronous queue at peak times.

E. Summarize and Report the Results

 Scenario | Avg. Form Load (s) | 95th %-ile Script Time (ms) | Avg. Workflow Time (ms) | Avg. Active Queue Length
 Open Account Form | 2.8 | 450 | N/A | N/A
 Save Opportunity (plugin) | N/A | N/A | 1,200 | N/A
 Run Daily Invoice Workflow | N/A | N/A | 3,500 | 75
 Bulk Update Orders via API | 1.5 | 200 | N/A | 30

Pro Tip: Collect these metrics for seven days straight to cover workload fluctuations. Compare peak-hour vs. off-peak variation.

With this fully loaded performance baseline in hand, you can:

  • Identify the forms, scripts, and workflows that need remediation now
  • Quantify the gains after each tuning step
  • Build a data-driven plan for your Dynamics 365 fine-tuning effort that maximizes business value and user productivity

2. Optimize Form Design and Client-Side Logic

One of the most frequent reasons for poor Dynamics 365 performance is bloated, busy forms. Every additional field, script, or control adds weight to the page. Here’s how to simplify your forms to create a smooth user experience:

A. Minimize Default Fields and Tabs

1. Audit field usage

  • Leverage the Form Usage report (in the Power Platform admin center) to identify fields that are infrequently populated or viewed.

2. Surface only essentials

  • Design the primary form to highlight the 8–10 fields users access most often.
  • Move secondary information—such as internal notes, custom flags, or legacy attributes—into a separate tab or quick-view form.

3. Use Quick-View Forms and Sub-Grids

  • For related entities (e.g., the top 3 open Cases on an Account), configure a quick-view form that only loads when the user clicks that section.
  • Set sub-grids to display a small number of rows (for example, 5) by default, with a “Show more” link to load additional records on demand.

Why it works: Each field and tab contributes its own DOM elements and data requests. Reducing the initial payload significantly cuts form-load time.

B. Limit onLoad Scripts

1. Consolidate libraries

  • Merge multiple JavaScript files into a single web resource to reduce HTTP requests.

2. Load only what’s needed

  • Encapsulate functions in conditional guards. For example, only run invoice-related scripts on the Invoice form:

function onLoad(executionContext) {
    // Identify which form is loaded; getCurrentItem() can return null when
    // the entity has only one form, so guard before reading the label
    var currentItem = executionContext.getFormContext()
        .ui.formSelector
        .getCurrentItem();
    if (!currentItem || currentItem.getLabel() !== "Invoice") return;

    // Run invoice-specific initialization only on the Invoice form
    initializeInvoiceScripts(executionContext);
}

3. Defer initialization

  • Use setTimeout or subscribe to the OnReadyStateComplete event to delay non-critical scripts until after the form renders.

Why it works: Excessive or unnecessary scripts block rendering. Consolidation and conditional loading reduce client CPU usage and improve load times.

C. Defer Ribbon and Navigation Rendering

1. Use the Ribbon Workbench

  • Remove unused commands from the Home and Form ribbons so only relevant buttons load by default.

2. Apply conditional loading

  • Add enable/display rules to custom ribbon actions so they appear only when appropriate (e.g., show “Generate Report” only when the record status is “Completed”).

Why it helps: Each ribbon command adds client-side metadata and event handlers. Stripping out unused buttons reduces initial page processing.

D. Implement Asynchronous Data Fetching

1. Optimize lookups

  • Delay lookup filtering by using an addPreSearch handler with setEntityTypes. For example:

function onLoad(executionContext) {
    var formContext = executionContext.getFormContext();
    var lookupField = formContext.getControl("new_relatedentityid");
    if (!lookupField) return;

    // Defer restricting the lookup until the user actually opens it
    lookupField.addPreSearch(function () {
        lookupField.setEntityTypes(["account", "contact"]);
    });
}

2. Lazy-load sub-grids

  • Disable “Enable quick find”, set the “Default number of rows” to a small value, and provide a “Load more” button to fetch additional records on demand.

Why it works: Deferring data calls until user interaction reduces the amount of data transferred on initial load, making the form feel faster.

3. Streamline Business Logic in Workflows and Plugins

A. Minimize the Use of Synchronous Workflows

  • What they are:
    Real-time (synchronous) workflows run immediately, as part of the same transaction as the save. The user waits for them to finish.
  • When to use:
    For business rules that must complete before the transaction commits, such as checking credit limits or adjusting data before the save.
  • What not to do:
    Long-running tasks like sending emails, calling external systems, or performing complex calculations. These should run asynchronously, outside the save transaction.
  • How to convert:
  1. Discover existing synchronous processes in the Process Center.
  2. For each, determine if the logic can execute post-save.
  3. Recreate the steps as an asynchronous workflow or Power Automate flow that runs on record create/update.
  • Pros:
    By offloading non-urgent tasks, users no longer wait on unrelated processing, and save times can improve by up to 4×.

B. Tune Plugin Registration

Scope your plugin steps:

In the Plugin Registration Tool, register your plugin only for the messages it needs (Create, Update, Delete) and the appropriate pipeline stage (Pre-Validation, Pre-Operation, Post-Operation).

Don’t register for every stage or attribute:

Otherwise your code executes far more often than it needs to.

Filter by entity and attribute:

Set filtering attributes so the plugin runs only when specific fields change (for example, only when Status is updated).

Example configuration:

If your plugin only recalculates a discount when the Price or Quantity of an Order changes, register it for just those attributes rather than for all updates.

Benefit:

This reduces the number of plugin executions, which decreases CPU and database load and improves overall system performance.
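
To make this concrete, here is a minimal, hypothetical C# sketch of such a plugin step. The class and field names (new_price, new_quantity) are illustrative, and the filtering attributes configured in the Plugin Registration Tool do the real work of limiting when the step fires:

using System;
using Microsoft.Xrm.Sdk;

public class RecalculateDiscountPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));
        var target = (Entity)context.InputParameters["Target"];

        // Filtering attributes already limit when this step fires; this
        // defensive guard keeps the plugin cheap if it is ever registered
        // more broadly.
        if (!target.Contains("new_price") && !target.Contains("new_quantity"))
            return;

        // ... recalculate the discount here ...
    }
}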

C. Mass Update (ExecuteMultipleRequest)

What it is:

The ExecuteMultipleRequest message lets you batch a set of create, update, and delete operations and execute them in a single call.

How to implement:

Collect OrganizationRequest objects (such as CreateRequest or UpdateRequest) in your code.

Add them to the Requests collection of an ExecuteMultipleRequest.

Execute the batch against the organization service.

Key settings:

ContinueOnError: whether the batch continues past individual failures or stops at the first error.

ReturnResponses: returns individual responses when follow-up logic needs them.

Advantage:

Minimizes network round-trips and database hits, yielding increased throughput, particularly for bulk transactions.
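
Here is a minimal C# sketch of the pattern described above. It assumes service is an IOrganizationService and ordersToUpdate is a list of Entity records prepared earlier:

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

var batch = new ExecuteMultipleRequest
{
    Settings = new ExecuteMultipleSettings
    {
        ContinueOnError = true,   // keep going past individual failures
        ReturnResponses = false   // skip per-request responses unless needed
    },
    Requests = new OrganizationRequestCollection()
};

foreach (Entity order in ordersToUpdate)
{
    batch.Requests.Add(new UpdateRequest { Target = order });
}

// A single round-trip executes the entire batch on the server.
var response = (ExecuteMultipleResponse)service.Execute(batch);

Note that Dataverse caps each ExecuteMultipleRequest at 1,000 requests, so split larger workloads into multiple batches.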

D. Profile and Refactor with the Profiler Plugin

Enable the Profiler:

In the Plugin Registration Tool, right-click the plugin step and select Start Profiling.

Capture execution:

Perform the actions in Dynamics 365 that trigger your plugin. The Profiler captures the full execution context.

Analyze the snapshot:

Download the profile file and replay it in the Plugin Registration Tool. Review the full call stack and the timing of each method and service call.

Identify slow code paths:

Search for time-consuming loops and repeated service calls within loops.

Refactor inefficient loops:

Move service calls out of loops wherever possible.

Exit early when a condition means no processing is needed.

Retrieve all the data in a single fetch (QueryExpression or FetchXML) instead of nesting queries, as in the sketch below.
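
As a hedged illustration of the single-fetch refactor, this C# sketch assumes service is an IOrganizationService and accountIds is an object[] of account GUIDs collected earlier:

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// One query with an In condition replaces a RetrieveMultiple call
// per account inside a loop.
var query = new QueryExpression("contact")
{
    ColumnSet = new ColumnSet("fullname", "parentcustomerid")
};
query.Criteria.AddCondition("parentcustomerid", ConditionOperator.In, accountIds);

// All related contacts arrive in a single service call.
EntityCollection contacts = service.RetrieveMultiple(query);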

Benefit:

Profiling uncovers bottlenecks in your plugin code that were previously hidden, so you can apply targeted optimizations or rewrite the slowest paths.

E. Real-World Example

One retail customer faced frequent account save failures and an average save time of 2.5 seconds because a synchronous plugin validated customer addresses against a third-party API. By:

Moving the validation logic out of the plugin into a separate module.

Building an Azure Function to handle the validation asynchronously.

Invoking the function via a Power Automate flow after the account record is created.

They achieved:

A 95% reduction in account save errors (the plugin no longer blocked saves when the external API was unresponsive).

A 40% reduction in save time (from an average of 2.5 seconds down to 1.5 seconds).

By minimizing real-time workflows, tuning plugin registration, batching requests, and profiling your code, you’ll keep your server-side logic lean, delivering faster, more responsive Dynamics 365 experiences for all your users.

4. Optimize Data Models and Indexing

A well-designed schema and the right indexes are essential for efficient queries and data operations in Dynamics 365. Here is how to organize your data model for maximum performance:

A. Enable Server-Side Paging for Large Tables

Why it matters

Fetching thousands of rows at once strains both the server and the client.

How to configure

FetchXML/QueryExpression: Set the PageInfo properties (Count, PageNumber, and PagingCookie).

OData queries: Use the $top and $skiptoken parameters to get a subset of records.

Result

The server returns only the “page” of records the client needs, reducing memory consumption and response time.
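
Here is a minimal C# paging sketch, assuming service is an IOrganizationService; 250 rows per page is an arbitrary example value:

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

var query = new QueryExpression("salesorder")
{
    ColumnSet = new ColumnSet("name", "totalamount"),
    PageInfo = new PagingInfo { Count = 250, PageNumber = 1 }
};

EntityCollection page;
do
{
    page = service.RetrieveMultiple(query);
    // ... process page.Entities for this page ...

    // Advance to the next page using the cookie the server returned.
    query.PageInfo.PageNumber++;
    query.PageInfo.PagingCookie = page.PagingCookie;
}
while (page.MoreRecords);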

B. Archive Stale Data

The big picture

Oversized tables slow down searches, exports, and plugin queries.

Archiving strategy

Identify records older than two years (e.g., resolved Cases, fulfilled Orders).

Move them to a custom Archive table in Dataverse or export them to an external Azure SQL store.

Remove archived records from the main table, or deactivate them.

Use scheduled Power Automate flows or Azure Data Factory pipelines to automate nightly archiving.

Result

The main tables stay smaller, so queries run faster and storage costs drop.

C. Index Lookup Fields

Why it matters

Lookup columns drive joins and appear frequently in filters and views. Without indexes, these queries force full table scans.

How to add indexes

In the Power Platform admin center, go to Data > Tables > [Your Entity] > Indexes.

Create a new index on each heavily used lookup column (e.g., customerid, ownerid).

Result

Queries filtering or ordering by these fields now run in milliseconds instead of tens of seconds.

D. Use Filtered Views for Complex Queries

The big picture

Pulling whole tables into reports or integrations causes slowdowns and wasted resources.

How to implement

SQL integrations (e.g., Data Export Service): Create filtered views in the target database that return only the columns and rows you need (e.g., active records in the current fiscal year).

FetchXML / Power BI: Write dataset queries with filter clauses instead of importing full entities.

Outcome

Downstream processes operate on a smaller, pre-filtered dataset, so exports and dashboard refreshes complete faster.

Real-World Example

A nonprofit logged fundraising activities in a single Dataverse table that grew to more than half a million records. Full-table exports took 8 minutes, delaying reporting. They implemented an archival solution:

Scheduled Data Factory pipelines to move campaigns completed more than two years ago into an Azure SQL database.

Indexed active records on statuscode and ownerid.

Created filtered views in Azure SQL for large exports.

After these updates, bulk data exports finished in less than 30 seconds, a 94% reduction in processing time.

When you combine server-side paging, archiving of stale records, indexes on key lookups, and filtered views, you’ll keep your Dataverse tables trim and your queries fast, ensuring your Dynamics 365 data model scales efficiently as your organization grows.


5. Leverage Caching and Server-Side Rendering

Dynamics 365 uses several levels of caching to avoid unnecessary database hits and speed up reads. You can improve performance further by tuning these settings and favoring server-side processing:

A. Caching with HTTP

What it does

Tells browsers to save (cache) static files — JavaScript, CSS, images — so they’re not downloaded again for every page visit.

How to configure

In your web server or CDN settings, apply a Cache-Control header with a max-age (in seconds) to all your static assets:

Cache-Control: public, max-age=31536000

Version file names (formScripts.v2.js) so browsers fetch the new script when it changes instead of reusing a stale cached copy.

Why it matters

Reduces the number of HTTP requests and makes client-side form rendering faster.

B. Platform-Level Data Caching

What it is

Caches recently or frequently read records in platform memory to avoid repeated database queries.

How to enable

In the Power Platform admin center, go to:

Environments > [Your Env] > Settings > Product > Performance

Enable Data Caching and set the Cache Duration (for example, 5–10 minutes) for entities that are mostly read (such as Price Lists or Product Catalogs).

Why it matters

Repeated requests for the same records are served from cache, reducing database CPU and server I/O load and returning responses faster.

C. Server-Side Rendering with ExecuteFetchRequest

What it does

Runs efficient SQL on the server, instead of transferring raw data to the client and computing it with client-side LINQ or JavaScript.

How to implement

Use the ExecuteFetchRequest class to send FetchXML requests directly to the server from within your plugins or custom API.

Don’t loop through thousands of records on the client—push filtering/sorting/aggregation back to the server.

Why it matters

Server-side processing uses precompiled SQL execution plans and database indexes to return only the requested data, avoiding extra network transfer and client CPU cost.
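
As a sketch of this pattern, current SDK versions typically run FetchXML server-side via FetchExpression and RetrieveMultiple (the same idea the ExecuteFetchRequest class expresses); the entity and attribute names (new_policy, new_premium) are illustrative:

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

string fetchXml = @"
    <fetch aggregate='true'>
      <entity name='new_policy'>
        <attribute name='new_premium' alias='total_premium' aggregate='sum' />
      </entity>
    </fetch>";

// The database computes the sum; only one aggregated row crosses the wire.
EntityCollection result = service.RetrieveMultiple(new FetchExpression(fetchXml));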

Real-World Example

One insurance company complained about sluggish dashboards for their Policy entity (10,000+ records). They tuned caching and server-side rendering:

Set up Data Caching on the Policy entity with a TTL of 10 minutes.

Added HTTP Cache-Control with a max-age of one year and versioned filenames to custom form scripts.

Replaced lots of LINQ client-side loops with ExecuteFetchRequest in their reporting plugin.

Result: Dashboard load time went from 12 seconds to under 5 seconds, increasing user satisfaction.

With a combination of HTTP-level caching, platform-level data caching, and server-side query processing, you can take the load off the database and ensure a snappier Dynamics 365 experience so that users get and stay productive.

6. Master Power Platform Performance Tuning

Dynamics 365 is part of the broader Power Platform family of products. The better you fine-tune these components, the more smoothly your custom apps and automated flows will run, and the more productive your users will be. Here’s how to make the most of each layer:

A. Optimize Canvas and Model-Driven Apps

Minimize data sources

Each connector adds latency. Include only the tables and views that are absolutely necessary for your app.

For model-driven apps, trim the sitemap and hide unneeded tables.

Use variables instead of repeated lookups.

In Canvas apps, assign the result of a LookUp to a variable in OnStart or OnVisible instead of calling the lookup repeatedly:

// Instead of this in each control:

LookUp(Accounts, AccountId = varSelectedId).Name

// Do this once:

Set(varAccount, LookUp(Accounts, AccountId = varSelectedId));

// Then reference:

varAccount.Name

Disable experimental features.

Prefer stable, generally available features where you can (e.g., modern grids, page components).

Trim unused data sources.

Refresh the Data panel periodically and delete connections, tables, or formulas the app no longer uses.

Why it matters:

Fewer connector calls and duplicate lookups mean fewer server round-trips and less client-side compute.

B. Monitor Power Automate Flows

As organizations adopt Power Automate to automate business processes and move data between cloud services, these flows need ongoing monitoring so they don’t drag down performance:

Identify parallel branches that create bottlenecks.

Review each flow’s run history to find branches waiting on the same connector.

Consolidate dependent or similar steps into one branch.

Cut back on redundant approvals and loops.

Replace multi-step loops with a single Apply to each configured to process large collections in batches.

Archive or deactivate flows that haven’t run in 30 days to keep the environment clean.

Use concurrency control

For triggers such as “When an item is created”, limiting concurrency prevents downstream systems from being overloaded.

The big picture: Misconfigured flows can blow past API limits, slowing connected apps and, with them, the entire platform.

C. Scale Dataverse Capacity

Monitor throttling events

Check Throttled Requests under Metrics in the Power Platform admin center.

Scale with additional DTUs

If you are regularly hitting throttling limits, you can ask Microsoft Support for additional Database Throughput Units (DTUs) or upgrade your capacity tier.

Partition and shard

For very large tables, consider custom partitioning schemes or multiple Dataverse environments to distribute data.

Why this matters: Sufficient DTUs give your apps and integrations the performance headroom to absorb usage spikes without slowdowns.

Real-World Example

A logistics company found that its custom model-driven app for tracking shipments loaded slowly. They optimized it by:

Turning off experimental grid features that were still in preview.

Removing three unused tables from the app’s sitemap.

Caching lookup values in the form’s OnLoad event instead of calling LookUp repeatedly.

Result: Average form render time dropped 45% (from 4.4s to 2.4s), sending user satisfaction and throughput through the roof.

And when you fine-tune your Canvas and Model-Driven Apps, monitor and refine Power Automate flows, and maintain adequate Dataverse capacity, you can deliver smoother, faster experiences across the Power Platform, amplifying the productivity gains from your Dynamics 365 investment.

7. Use Azure Integration for Heavy-Load Scenarios

Dynamics 365 is part of the larger Power Platform environment, and where it handles high volumes of data operations and complex integrations, Azure services can deliver major performance improvements through scalability and efficiency across use cases.

A. Offload Heavy Imports Using Azure Data Factory

What it is

Azure Data Factory (ADF) is a cloud-based extract, transform, load (ETL) service for moving and processing massive amounts of data from different sources.

How to configure

Stage data in Azure Blob Storage: Use ADF to copy massive datasets (for example, large CSV files) into Azure Blob Storage.

Transform and load into Dataverse

Use ADF pipelines to transform the data as required and ingest it into Dataverse with the available connectors.

Why it matters

This strategy offloads computationally intensive processing from Dataverse, freeing API call capacity and improving overall throughput.

B. Azure Functions for Custom Logic

What it is

Azure Functions is a serverless compute service that runs event-triggered code without requiring you to provision infrastructure. It’s ideal for executing your own logic on an event or trigger.

How to implement

Event-triggered functions: Fire an Azure Function from Dataverse events, such as records being created or updated.

CPU-bound work handlers

Use functions for CPU-bound activities like document generation, heavy calculations, or data manipulation.

Why it matters

Offloading intensive processing to Azure Functions reduces the load on Dataverse and improves performance and responsiveness.
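
For illustration, here is a hedged sketch of an HTTP-triggered Azure Function (isolated worker model) that a Power Automate flow or webhook could call with a Dataverse event payload; the function name and logic are hypothetical:

using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class ValidateAddressFunction
{
    [Function("ValidateAddress")]
    public async Task<HttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        string payload = await req.ReadAsStringAsync();

        // ... do the CPU-bound work here: call an external validation
        //     service, generate a document, or run heavy calculations ...

        var response = req.CreateResponse(HttpStatusCode.OK);
        await response.WriteStringAsync("validated");
        return response;
    }
}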

C. Event-Based Scaling with Azure Event Grid

What it is

Azure Event Grid is a fully managed event-routing service for building event-driven architectures, with intelligent event filtering.

How to implement

Configure Dataverse to publish events: Set Dataverse up to emit events (for example, record changes) to Azure Event Grid.

Send events to the processing pipeline: Use Event Grid to route these events to Azure Functions or other services for processing.

Why it matters: This architecture scales on demand and reacts to data changes as they happen.

Real-World Example

A procurement client had a nightly invoice process that took around 4 hours. Moving it to an Azure Function triggered from Dataverse via Event Grid produced significant gains:

Faster processing: Batches that once took 4 hours now finish in 20 minutes.

Better scalability: The solution handles large data sets without losing performance.

Enhanced responsiveness: Invoices reached customers faster, improving satisfaction.

With Azure services such as Data Factory, Functions, and Event Grid handling heavy loads, organizations can manage data at scale and keep their Dynamics 365 environment responsive.

8. Implement Robust Monitoring and Alerts

Monitor performance around the clock to sustain your productivity gains:

Power Platform Admin Center

Watch latency, API request volumes, and database growth.

Azure Application Insights

Instrument your plugins and custom APIs for end-to-end tracing. Add alerts to be notified if error rates or execution times exceed a threshold.

Custom telemetry

Log client-side performance metrics from your form JavaScript back to Azure, for example via a helper such as XrmTelemetry or the Application Insights JavaScript SDK.

Example

A software house set alerts for form load times greater than 2 seconds. Proactive notifications let their support team fix rogue scripts before end users noticed.

9. Adopt Governance and Best Practices

Effective governance works hand in hand with technical tuning:

Solution segmentation: Split customizations into separate solutions by purpose (Foundation / Feature / Hotfix).

Release pipelines: Automatically deploy with Azure DevOps or GitHub Actions to guarantee consistency across environments.

Scheduled health checks: Run quarterly reviews with out-of-box tools (Organization Insights) or community tools like XrmToolBox (Metadata Document Generator).

Training and docs: Equip admins and customizers with best practices for each component.

10. When to Contact DAX Software Solution

Yet no matter how thorough your own tuning, complex environments typically require expert guidance. DAX Software Solution provides:

  • In-depth diagnostics and performance evaluations
  • Holistic tuning covering Dynamics 365, the Power Platform, and Azure
  • Pre-built and custom Power Platform performance tuning workshops and consulting

Take a look at our DAX Software homepage. When you want to fuel rapid progress and run like a well-oiled machine, get in touch with DAX Software Solution for expert advice and hands-on assistance.

By applying these Dynamics 365 fine-tuning techniques, you can reduce friction, speed up daily work, and remove the barriers that keep your users from doing what they do best: driving your business. Regular, ongoing monitoring and tuning ensure your Dynamics investment keeps operating at peak efficiency long after go-live.

