Introduction

This article walks developers through integrating a custom cache solution into a Prisma project, diverging from the usual recommendations for extensions such as prisma-redis-middleware and prisma-extension-cache-manager. Rather than advocating a pre-built solution, the focus is on empowering developers to craft a cache system tailored to their project's needs. By taking control of the caching strategy, developers can optimize database queries and improve application performance: storing query results in a cache minimizes repeated, resource-intensive executions and leads to a more responsive, streamlined user experience, which is particularly valuable for applications managing substantial datasets.

Query Optimization Strategies

1. Selective Field Retrieval

When querying data with Prisma, it's essential to only request the fields your application needs. This minimizes the data transferred between the database and the application, resulting in faster query execution.

const user = await prisma.user.findUnique({
  where: {
    id: 1,
  },
  select: {
    id: true,
    username: true,
    email: true,
  },
});

Example of selective field retrieval.

In this example, we're explicitly selecting only the id, username, and email fields, reducing the payload size and improving response times.

2. Batching Queries for Efficiency

Prisma allows developers to combine related reads into a single database call, minimizing the overhead associated with numerous round trips.

const userPosts = await prisma.user.findUnique({
  where: {
    id: 1,
  },
  select: {
    id: true,
    username: true,
    email: true,
    posts: {
      select: {
        id: true,
        title: true,
        content: true,
      },
    },
  },
});

User details and user posts are fetched in a single query.

Here, we efficiently fetch both user details and associated posts in a single query, optimizing the data retrieval process.

3. Implementing Pagination

For large datasets, implementing pagination ensures that you fetch only the necessary amount of data, preventing unnecessary strain on the database and improving query performance.


const users = await prisma.user.findMany({
  take: 10, // Number of items per page
  skip: 10, // Skip the first 10 items, i.e. fetch the second page
});

Paginating results with take and skip.

Pagination is essential for applications dealing with extensive datasets, as it allows for a more responsive user experience and reduces database load.
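The skip value is typically derived from the current page number. As a minimal sketch (the helper name paginationArgs is hypothetical, not part of Prisma's API), the arithmetic can be wrapped in a small helper:

```typescript
// Hypothetical helper: converts a 1-based page number and a page size
// into the take/skip arguments that Prisma's findMany expects.
function paginationArgs(page: number, pageSize: number) {
  return {
    take: pageSize,              // number of items per page
    skip: (page - 1) * pageSize, // items to skip before this page
  };
}

// Page 2 with 10 items per page skips the first 10 records.
const pageTwo = paginationArgs(2, 10); // { take: 10, skip: 10 }
```

It can then be spread into a query, e.g. prisma.user.findMany({ ...paginationArgs(2, 10) }).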

Logic Behind Caching

Before delving into the code, it's essential to understand how queries can be structured. When querying an entity, the where clause can be assembled in many different ways, yet caching can remain fairly straightforward. The approach is to take the entity's name and normalize the args object, so that the same query produces the same representation regardless of the order in which its fields appear. With the fields made uniform, the cache can reliably recognize when a query has been executed before. After normalization, a hash is generated and the cache key takes the form:

entity:generated_hash.


This process ensures a systematic representation, allowing for efficient caching mechanisms.
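The normalization and hashing step itself is not shown in this article, so here is a minimal sketch of what a generateUniqueKey helper might look like, assuming Node's built-in crypto module: object keys are sorted recursively before serialization, so logically identical args always hash to the same value.

```typescript
import { createHash } from 'node:crypto';

// Recursively sort object keys so that identical query args serialize
// to the same string regardless of the order their fields were written in.
function normalize(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(normalize);
  if (value !== null && typeof value === 'object') {
    return Object.keys(value as Record<string, unknown>)
      .sort()
      .reduce<Record<string, unknown>>((acc, key) => {
        acc[key] = normalize((value as Record<string, unknown>)[key]);
        return acc;
      }, {});
  }
  return value;
}

// Hash the normalized args into a stable cache-key suffix.
function generateUniqueKey(args: unknown): string {
  return createHash('sha256')
    .update(JSON.stringify(normalize(args)))
    .digest('hex');
}
```

With this, { where: { id: 1, name: 'a' } } and { where: { name: 'a', id: 1 } } produce the same key.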

import { Prisma, PrismaClient } from '@prisma/client';

type ICacheExtension = {
  models: string[];
};

const OPERATION_FIND_FIRST = 'findFirst';
const OPERATION_FIND_UNIQUE = 'findUnique';

const cachedOperations = [
  OPERATION_FIND_FIRST,
  OPERATION_FIND_UNIQUE
];

const cacheExtension = ({ models }: ICacheExtension) => {
  return Prisma.defineExtension({
    name: 'cache',
    query: {
      $allModels: {
        async $allOperations({ model, operation, args, query }) {
          if (
            !models.includes(model) ||
            !cachedOperations.includes(operation)
          ) {
            return query(args);
          }

          // generateUniqueKey hashes the normalized args; cacheClient.get
          // returns the cached result for (model, key) or runs the query
          // and stores its result. Both are defined elsewhere.
          const key = generateUniqueKey(args);
          return cacheClient.get(model, key, async () => query(args));
        }
      }
    }
  });
};

const cacheConfig = cacheExtension({
  models: ['product']
});

const prismaClient = new PrismaClient().$extends(cacheConfig);
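The cacheClient used above is assumed rather than shown. A minimal in-memory sketch of its get method could look like the following (class name hypothetical); a production implementation would more likely wrap Redis and attach a TTL to each entry.

```typescript
// Hypothetical in-memory cache client. get() returns the cached value
// for (table, key) when present; otherwise it runs the fallback query
// and stores its result for subsequent calls.
class InMemoryCacheClient {
  private store = new Map<string, unknown>();

  async get<T>(
    table: string,
    key: string,
    fallback: () => Promise<T>
  ): Promise<T> {
    const cacheKey = `${table}:${key}`;
    if (this.store.has(cacheKey)) {
      return this.store.get(cacheKey) as T;
    }
    const result = await fallback();
    this.store.set(cacheKey, result);
    return result;
  }
}

const cacheClient = new InMemoryCacheClient();
```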

While the previous steps in our caching strategy have been relatively straightforward, complexities arise when dealing with updates to records within the entity. The challenge lies in determining which cache entry to delete when modifications occur. The key consists of the entity name and a hash, varying based on the requested fields and the combination used in the where clause. This structure potentially leads to multiple cached instances of a record, each with distinct search criteria and field combinations.

The seemingly convenient approach of deleting "entity*" is impractical in a dynamic system: records are updated constantly, which would clear the entire cache for the entity every second. A more nuanced solution is to designate a specific field as the cacheKeyField. Caching only occurs when this designated field is present, allowing the generation of a key in the format

"entity:cacheKeyField-generated_hash."

Consequently, during record updates, selectively deleting the cache entry
"entity:cacheKeyField*"
becomes the means to effectively clear the cache for that particular record.

The following code shows how this solution translates into practice within our Prisma project.

import { Prisma, PrismaClient } from '@prisma/client';

type ICacheExtension = {
  models: {
    model: string;
    cacheKeyField: string;
  }[];
};

const OPERATION_FIND_FIRST = 'findFirst';
const OPERATION_FIND_UNIQUE = 'findUnique';
const OPERATION_UPDATE = 'update';
const OPERATION_DELETE = 'delete';

const cachedOperations = [
  OPERATION_FIND_FIRST,
  OPERATION_FIND_UNIQUE,
  OPERATION_UPDATE,
  OPERATION_DELETE
];

const cacheExtension = ({ models }: ICacheExtension) => {
  const { cacheableModels, cacheKeyFields } = models.reduce<{
    cacheableModels: string[];
    cacheKeyFields: Record<string, string>;
  }>(
    (acc, { model, cacheKeyField }) => {
      acc.cacheableModels.push(model);
      acc.cacheKeyFields[model] = cacheKeyField;
      return acc;
    },
    {
      cacheableModels: [],
      cacheKeyFields: {}
    }
  );

  return Prisma.defineExtension({
    name: 'cache',
    query: {
      $allModels: {
        async $allOperations({ model, operation, args, query }) {
          if (
            !cacheableModels.includes(model) ||
            !cachedOperations.includes(operation)
          ) {
            return query(args);
          }

          const cacheKeyField = cacheKeyFields[model];
          const fieldValue = (args as Record<string, any>)?.where?.[
            cacheKeyField
          ];

          const shouldBypassCache =
            !fieldValue &&
            (operation === OPERATION_FIND_FIRST ||
              operation === OPERATION_FIND_UNIQUE);
          if (shouldBypassCache) {
            return query(args);
          }

          if (
            operation === OPERATION_UPDATE ||
            operation === OPERATION_DELETE
          ) {
            args.select = {
              [cacheKeyField]: true,
              ...(args?.select ?? {})
            };
            return cacheClient.executeWithCacheInvalidationByField(
              model,
              cacheKeyField,
              async () => query(args)
            );
          }

          const key = `${fieldValue}-${generateUniqueKey(args)}`;
          return cacheClient.get(model, key, async () => query(args));
        }
      }
    }
  });
};

const cacheConfig = cacheExtension({
  models: [
    {
      model: 'product',
      cacheKeyField: 'id'
    }
  ]
});

const prismaClient = new PrismaClient().$extends(cacheConfig);

Prisma extension optimizing 'product' CRUD operations with cache handling.

Notice that we now pass an array of objects to the extension, each containing the model's name and its cacheKeyField. Reducing this array into a structured lookup, essentially a hash table keyed by model name, gives us fast access to each model's cacheKeyField and lays a solid foundation for efficient cache management.

const { cacheableModels, cacheKeyFields } = models.reduce<{
  cacheableModels: string[];
  cacheKeyFields: Record<string, string>;
}>(
  (acc, { model, cacheKeyField }) => {
    acc.cacheableModels.push(model);
    acc.cacheKeyFields[model] = cacheKeyField;
    return acc;
  },
  {
    cacheableModels: [],
    cacheKeyFields: {}
  }
);


When initiating a query for the entity, we carefully inspect the where clause. If the cacheKeyField is absent from it, we deliberately bypass the cache: without the designated field there is no reliable key to look up, or to invalidate later, so caching would be impractical and wasteful.

Conversely, when the cacheKeyField is present in the where clause, we build the cache-entry key from its value. This selective caching, based on the presence of the key field, ensures that every cached entry can later be found and invalidated, contributing to the overall efficiency of query execution.

const cacheKeyField = cacheKeyFields[model];
const fieldValue = (args as Record<string, any>)?.where?.[cacheKeyField];

const shouldBypassCache =
  !fieldValue &&
  (operation === OPERATION_FIND_FIRST ||
    operation === OPERATION_FIND_UNIQUE);

if (shouldBypassCache) {
  return query(args);
}

For cache invalidation during updates or deletes, we intentionally include the chosen cacheKeyField in the query's select. This guarantees that the query result contains the value needed to invalidate the corresponding cache entries, preserving data integrity in the cache whenever records are modified or removed.

if (
  operation === OPERATION_UPDATE ||
  operation === OPERATION_DELETE
) {
  args.select = {
    [cacheKeyField]: true,
    ...(args?.select ?? {})
  };
  return cacheClient.executeWithCacheInvalidationByField(
    model,
    cacheKeyField,
    async () => query(args)
  );
}

The method executeWithCacheInvalidationByField executes the provided action and then invalidates the cache by removing every entry whose key starts with the value of the cacheKeyField taken from the result.

async executeWithCacheInvalidationByField<T = any>(
  table: string,
  cacheKeyField: string,
  action: () => Promise<T>
): Promise<T> {
  const result = await action();
  await this.delete(table, `${result?.[cacheKeyField]}*`);
  return result;
}
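The delete method called here is also assumed. With Redis it would typically scan for keys matching the pattern; an in-memory sketch (hypothetical class and method names) supporting the trailing-wildcard pattern used above could be:

```typescript
// Hypothetical cache client showing prefix-based wildcard deletion:
// delete(table, 'value*') removes every entry for that table whose key
// starts with 'value'.
class InMemoryCacheClient {
  private store = new Map<string, unknown>();

  set(table: string, key: string, value: unknown): void {
    this.store.set(`${table}:${key}`, value);
  }

  async delete(table: string, pattern: string): Promise<void> {
    const prefix = `${table}:${pattern.replace(/\*$/, '')}`;
    for (const key of [...this.store.keys()]) {
      if (key.startsWith(prefix)) {
        this.store.delete(key);
      }
    }
  }

  size(): number {
    return this.store.size;
  }
}
```

After an update to the product with id 42, calling delete('product', '42*') clears every cached query for that record while leaving other records' entries intact.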

Conclusions

This guide explored optimizing database queries for high-performance applications, emphasizing cache systems as a way to mitigate the cost of repeated, resource-intensive database executions. Focusing on Prisma projects, we set aside off-the-shelf solutions in favour of a tailored approach, showing how to build a flexible cache extension. By understanding query structures, implementing the cache logic, and introducing the cacheKeyField concept for precise cache invalidation during updates or deletes, developers now have a concise yet complete toolkit for integrating efficient caching into their Prisma applications while accommodating the unique needs of individual projects.



Thank you for reading this article. Catch you in the next one.