Firebase Cloud Function with Firestore returning "Deadline Exceeded"


Solution 1

Firestore has limits, and "Deadline Exceeded" most likely means you are hitting one of them.

See https://firebase.google.com/docs/firestore/quotas, in particular:

Maximum write rate to a document: 1 per second

See also this discussion: https://groups.google.com/forum/#!msg/google-cloud-firestore-discuss/tGaZpTWQ7tQ/NdaDGRAzBgAJ
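
Below is a minimal sketch of one way to stay under that per-document write rate by spacing out repeated writes to the same document. The collection and field names are placeholders, and it assumes the Admin SDK has already been initialized.

import * as admin from "firebase-admin";

const db = admin.firestore();
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Hypothetical example: write a series of values to a single document,
// pausing roughly one second between writes so the per-document limit
// of 1 write per second is not exceeded.
async function writeSequentially(values: number[]): Promise<void> {
    const ref = db.collection("stats").doc("counter"); // placeholder path
    for (const value of values) {
        await ref.set({ value }, { merge: true });
        await sleep(1000);
    }
}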

Solution 2

I have written this little script which uses batch writes (max 500) and only writes one batch after the other.

Use it by first creating a batch worker: let batch: any = new FbBatchWorker(db);. Then add anything to the worker with batch.set(ref.doc(docId), MyObject); and finish it via batch.commit(). The API is the same as for a normal Firestore batch (https://firebase.google.com/docs/firestore/manage-data/transactions#batched-writes); however, it currently only supports set and delete.

import { firestore } from "firebase-admin";

// Runs a single set or delete against Firestore and invokes the callback once
// the operation has completed.
class FBWorker {
    callback: Function;

    constructor(callback: Function) {
        this.callback = callback;
    }

    work(data: {
        type: "SET" | "DELETE";
        ref: FirebaseFirestore.DocumentReference;
        data?: any;
        options?: FirebaseFirestore.SetOptions;
    }) {
        if (data.type === "SET") {
            // tslint:disable-next-line: no-floating-promises
            data.ref.set(data.data, data.options).then(() => {
                this.callback();
            });
        } else if (data.type === "DELETE") {
            // tslint:disable-next-line: no-floating-promises
            data.ref.delete().then(() => {
                this.callback();
            });
        } else {
            this.callback();
        }
    }
}

// Queues set/delete operations and, on commit(), processes them with at most
// `maxBatchSize` writes in flight at any one time.
export class FbBatchWorker {
    db: firestore.Firestore;
    batchList2: {
        type: "SET" | "DELETE";
        ref: FirebaseFirestore.DocumentReference;
        data?: any;
        options?: FirebaseFirestore.SetOptions;
    }[] = [];
    elemCount: number = 0;
    private _maxBatchSize: number = 490;

    public get maxBatchSize(): number {
        return this._maxBatchSize;
    }
    public set maxBatchSize(size: number) {
        if (size < 1) {
            throw new Error("Size must be positive");
        }

        if (size > 490) {
            throw new Error("Size must not be larger then 490");
        }

        this._maxBatchSize = size;
    }

    constructor(db: firestore.Firestore) {
        this.db = db;
    }

    /**
     * Processes the queued writes with up to `maxBatchSize` workers running in
     * parallel. Each worker starts the next queued write as soon as its previous
     * one has finished, and the returned promise resolves once the queue is empty.
     */
    async commit(): Promise<any> {
        const workerProms: Promise<void>[] = [];
        const maxWorker = this.batchList2.length > this.maxBatchSize ? this.maxBatchSize : this.batchList2.length;
        for (let w = 0; w < maxWorker; w++) {
            workerProms.push(
                new Promise<void>((resolve) => {
                    const A = new FBWorker(() => {
                        const next = this.batchList2.pop();
                        if (next) {
                            A.work(next);
                        } else {
                            resolve();
                        }
                    });

                    // The queue holds at least `maxWorker` entries here, so this pop is safe.
                    A.work(this.batchList2.pop()!);
                }),
            );
        }

        return Promise.all(workerProms);
    }

    set(dbref: FirebaseFirestore.DocumentReference, data: any, options?: FirebaseFirestore.SetOptions): void {
        this.batchList2.push({
            type: "SET",
            ref: dbref,
            data,
            options,
        });
    }

    delete(dbref: FirebaseFirestore.DocumentReference) {
        this.batchList2.push({
            type: "DELETE",
            ref: dbref,
        });
    }
}
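
A minimal usage sketch (the collection name, document IDs, and import path are placeholders; it assumes the class above lives in a local module and the Admin SDK is initialized):

import * as admin from "firebase-admin";
import { FbBatchWorker } from "./FbBatchWorker"; // wherever the class above lives

const db = admin.firestore();

// Hypothetical example: queue one set per city, then commit them all.
async function importCities(cities: { id: string; data: object }[]): Promise<void> {
    const batch = new FbBatchWorker(db);
    const ref = db.collection("cities"); // placeholder collection
    for (const city of cities) {
        batch.set(ref.doc(city.id), city.data);
    }
    await batch.commit();
}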

Solution 3

In my own experience, this problem can also happen when you try to write documents over a bad internet connection.

I use a solution similar to Jürgen's suggestion, inserting documents in batches smaller than 500 at a time, and this error appears when I'm on a not-so-stable Wi-Fi connection. When I plug in the cable, the same script with the same data runs without errors.

Solution 4

I tested this by having 15 concurrent AWS Lambda functions writing 10,000 requests into the database into different collections/documents, milliseconds apart. I did not get the DEADLINE_EXCEEDED error.

Please see the Firebase documentation:

'deadline-exceeded': Deadline expired before operation could complete. For operations that change the state of the system, this error may be returned even if the operation has completed successfully. For example, a successful response from a server could have been delayed long enough for the deadline to expire.

In our case we are writing a small amount of data and it works most of the time, but losing data is unacceptable. I have not concluded why Firestore fails to write simple, small bits of data.

SOLUTION:

I am using an AWS Lambda function that uses an SQS event trigger.

  # This function receives requests from the queue and handles them
  # by persisting the survey answers for the respective users.
  QuizAnswerQueueReceiver:
    handler: app/lambdas/quizAnswerQueueReceiver.handler
    timeout: 180 # The SQS visibility timeout should always be greater than the Lambda function’s timeout.
    reservedConcurrency: 1 # optional, reserved concurrency limit for this function. By default, AWS uses account concurrency limit    
    events:
      - sqs:
          batchSize: 10 # Wait for 10 messages before processing.
          maximumBatchingWindow: 60 # The maximum amount of time in seconds to gather records before invoking the function
          arn:
            Fn::GetAtt:
              - SurveyAnswerReceiverQueue
              - Arn
    environment:
      NODE_ENV: ${self:custom.myStage}
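
For reference, here is a rough sketch of what such a handler could look like. It is an illustration only: the module path matches the serverless config above, but the payload shape, collection name, and field names are assumptions, and it presumes firebase-admin has already been initialized.

// app/lambdas/quizAnswerQueueReceiver.ts (hypothetical sketch)
import { SQSEvent } from "aws-lambda";
import * as admin from "firebase-admin";

const db = admin.firestore();

export const handler = async (event: SQSEvent): Promise<void> => {
    // Process the SQS records one at a time so writes to the same document
    // are spaced out rather than fired concurrently.
    for (const record of event.Records) {
        const payload = JSON.parse(record.body); // assumed JSON message body
        await db
            .collection("surveyAnswers") // placeholder collection
            .doc(payload.userId)         // assumed field on the message
            .set(payload.answers, { merge: true });
    }
};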

I am using a dead letter queue connected to my main queue for failed events.

  Resources:
    QuizAnswerReceiverQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:provider.environment.QUIZ_ANSWER_RECEIVER_QUEUE}
        # VisibilityTimeout MUST be greater than the lambda functions timeout https://lumigo.io/blog/sqs-and-lambda-the-missing-guide-on-failure-modes/

        # The length of time during which a message will be unavailable after a message is delivered from the queue.
        # This blocks other components from receiving the same message and gives the initial component time to process and delete the message from the queue.
        VisibilityTimeout: 900 # The SQS visibility timeout should always be greater than the Lambda function’s timeout.

        # The number of seconds that Amazon SQS retains a message. You can specify an integer value from 60 seconds (1 minute) to 1,209,600 seconds (14 days).
        MessageRetentionPeriod: 345600  # The number of seconds that Amazon SQS retains a message. 
        RedrivePolicy:
          deadLetterTargetArn:
            "Fn::GetAtt":
              - QuizAnswerReceiverQueueDLQ
              - Arn
          maxReceiveCount: 5 # The number of times a message is delivered to the source queue before being moved to the dead-letter queue.
    QuizAnswerReceiverQueueDLQ:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "${self:provider.environment.QUIZ_ANSWER_RECEIVER_QUEUE}DLQ"
        MessageRetentionPeriod: 1209600 # 14 days in seconds

Solution 5

If the error shows up after around 10 seconds, it's probably not your internet connection; it might be that your function is not returning any promise. In my experience, I got the error simply because I had wrapped a Firestore set operation (which returns a promise) inside another promise. You can do this:

return db.collection("COL_NAME").doc("DOC_NAME").set(attribs).then(ref => {
        var SuccessResponse = {
            "code": "200"
        }

        var resp = JSON.stringify(SuccessResponse);
        return resp;
    }).catch(err => {
        console.log('Quiz Error OCCURED ', err);
        var FailureResponse = {
            "code": "400",
        }

        var resp = JSON.stringify(FailureResponse);
        return resp;
    });

instead of this, where the outer promise is never resolved, so the function hangs until the deadline expires:

return new Promise((resolve,reject)=>{ 
    db.collection("COL_NAME").doc("DOC_NAME").set(attribs).then(ref => {
        var SuccessResponse = {
            "code": "200"
        }

        var resp = JSON.stringify(SuccessResponse);
        return resp;
    }).catch(err => {
        console.log('Quiz Error OCCURED ', err);
        var FailureResponse = {
            "code": "400",
        }

        var resp = JSON.stringify(FailureResponse);
        return resp;
    });

});

Author: Scott D (Senior iOS Software Engineer)

Updated on December 04, 2020

Comments

  • Scott D
    Scott D over 3 years

    I took one of the sample functions from the Firestore documentation and was able to successfully run it from my local firebase environment. However, once I deployed to my firebase server, the function completes, but no entries are made in the firestore database. The firebase function logs show "Deadline Exceeded." I'm a bit baffled. Anyone know why this is happening and how to resolve this?

    Here is the sample function:

    exports.testingFunction = functions.https.onRequest((request, response) => {
        var data = {
            name: 'Los Angeles',
            state: 'CA',
            country: 'USA'
        };

        // Add a new document in collection "cities" with ID 'LA'
        var db = admin.firestore();
        var setDoc = db.collection('cities').doc('LA').set(data);

        response.status(200).send();
    });
    
    • Ramon
      Ramon over 6 years
      Not sure if it's related to the error you're seeing but you probably want to wait for the promise returned by doc(...).set(data) to resolve, by using return db.collection('cities').doc('LA').set(data).then(result => response.status(200))
    • Scott D
      Scott D over 6 years
      @Ramon changing it to a promise did remove the error from the logs, but unfortunately did not successfully insert the data into the collection.
  • KD.
    KD. over 6 years
    Any workaround suggestions for this? I want to do an initial data import of 1,500,000 records, processing them with Node. Any suggestions appreciated.
  • Nobuhito Kurose
    Nobuhito Kurose over 6 years
    @KD. The 1-per-second limit applies to a single document. For the database, the limit is "Maximum writes per second per database | 2,500 (up to 2.5 MiB per second)". You can avoid these limits by using setTimeout, though it takes time.
  • KD.
    KD. over 6 years
    Thanks. Adding a timeout would be my last option to try, as I have too much data for the initial import. I really wish there were a way to import JSON just like with the Realtime Database. For now I am going ahead with the Realtime Database, since the same approach works fine with it.
  • Jürgen Brandstetter
    Jürgen Brandstetter about 6 years
    I have "solved" it with 2 steps. 1) I use batch writes. And set the batch to 490 (max is apparently 500) 2) I wait that each batch finished, before I sent the next one.
  • ilya
    ilya over 5 years
    I'm getting the same error even when only 20 items are set. Why is that? The document key (a timestamp) is unique across all documents. What could be wrong?
  • ilya
    ilya over 5 years
    It was because of the document's large size; it needed more than a second to be written.
  • atereshkov
    atereshkov about 4 years
    Thank you for the script. Just one thing: what's the difference between this script and a batched write from Firebase?
  • Jürgen Brandstetter
    Jürgen Brandstetter about 4 years
    The major difference is that it's not using transactions. The issue with a Firebase batched write is that it sends e.g. 100 updates, waits for all of them to finish writing into the DB, and only then can you write the next batch. I had the chance to talk to the Firebase team myself, and they said it's better to just write directly without batching, except if you actually need to make sure all writes happen at once.
  • atereshkov
    atereshkov about 4 years
    Ok, got it. I've tried to use your script for a delete operation and, strangely, I got the deadline exceeded error even with the script. No idea why; I'll check it out one more time. It only deletes 100 documents and then I get the error, and after that I can again delete 100 documents before getting the error. Looks like some limitation around 100 operations here. Any thoughts?
  • atereshkov
    atereshkov about 4 years
    Ok, fixed it by setting _maxBatchSize to 100.
  • Jürgen Brandstetter
    Jürgen Brandstetter about 4 years
    Good to hear. I believe that means your objects are pretty large.