
ZEIT


# Now

Instant global deployments


Implications of Now v2 for GraphQL servers

ZEIT/Now · November 9, 2018 at 8:33am


I'm trying to understand the implications of Now v2 for GraphQL servers.

As far as I know, a typical GraphQL API exposes a single endpoint like /graphql and uses queries and mutations to execute different actions.

However, a typical REST API exposes different actions on different endpoints.
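A minimal sketch of that difference (no real GraphQL parsing; all names are illustrative): every GraphQL request hits the same route and the request body selects the action, whereas a REST API would select it by URL path.

```javascript
// Illustrative only: a single /graphql endpoint dispatches on the
// operation named in the request body, not on the URL path.
const resolvers = {
  cat: ({ id }) => ({ id, name: 'Whiskers' }),  // a "query"
  updateCat: ({ id, name }) => ({ id, name }),  // a "mutation"
};

// Every request arrives at the same route; the body picks the action.
function handleGraphql({ operation, args }) {
  return resolvers[operation](args);
}

// A REST API would instead expose GET /cats/:id, PUT /cats/:id, and so on.
console.log(handleGraphql({ operation: 'cat', args: { id: '1' } }));
// → { id: '1', name: 'Whiskers' }
```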

After reading the documentation and announcement for Now v2, I can see how one would split a big server.js file with multiple Express routes into smaller files and pass them through different builders to create multiple lambdas.

I'm asking myself: is there a way to achieve the same with GraphQL resolvers?
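For context, the Express split described above might look roughly like this in a now.json (file paths are illustrative; @now/node was the Node builder for Now v2 at the time):

```json
{
  "version": 2,
  "builds": [
    { "src": "routes/users.js", "use": "@now/node" },
    { "src": "routes/orders.js", "use": "@now/node" }
  ],
  "routes": [
    { "src": "/users", "dest": "/routes/users.js" },
    { "src": "/orders", "dest": "/routes/orders.js" }
  ]
}
```

Each entry in builds becomes its own lambda, which is exactly the granularity that is unclear for a single-endpoint GraphQL server.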


November 9, 2018 at 9:38am

I was wondering the same thing! Though quite frankly, I don't like the idea of having to change the entire GraphQL server structure to support the hosting environment; as it currently is, I can host on AWS, GCF, etc. all without changing a line of code. But that 5MB max bundle size per route is going to kill any chance I have of running it on Now.


What about creating lambdas for every resolver and just forwarding GraphQL requests?


But a resolver isn't a "stand-alone" HTTP endpoint; it's all handled through the schema and Apollo Server (for example). There's only one endpoint.

Apollo Server lambda has some limitations, and even then it's a single endpoint to trigger it all. There is no way it's going to be less than 5MB.



With Serverless framework you can host lambdas on all of those platforms (as well as now [not using Serverless])


Yeah, I know, we're using Apex Up at the moment. But, I just wondered if Zeit had even considered GraphQL APIs



I think I would deploy λ /graphql, λ /resolvers/cat, λ /resolvers/updateCat, etc.

λ /graphql parses the GraphQL query, fetches λ /resolvers/cat and returns the result. Should be less than 5 MB. Dependencies: apollo-server.

λ /resolvers/cat gets cat data from the database and executes some business logic. Should be less than 5 MB. Dependencies: mongodb.

Actually, I already store the code of my resolvers in subfolders like /resolvers/queries/cat.

Maybe this makes sense?
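A dependency-free sketch of that split (no real GraphQL parsing or networking, so it stays self-contained: the stub `fetchLambda` stands in for an HTTP call to λ /resolvers/cat, an in-memory object stands in for MongoDB, and all names are made up; a real version would use apollo-server in the gateway and mongodb in the resolver):

```javascript
// λ /resolvers/cat — would live in its own bundle (deps: mongodb).
// An in-memory stub stands in for the database here.
const catLambda = async ({ id }) => ({ id, name: 'Whiskers' });

// λ /graphql — would live in its own bundle (deps: apollo-server).
// `fetchLambda` is a stand-in for an HTTP request to the resolver lambda.
const lambdas = { '/resolvers/cat': catLambda };
const fetchLambda = (path, args) => lambdas[path](args);

const resolvers = {
  // The gateway's resolver forwards the call instead of touching the DB.
  cat: (args) => fetchLambda('/resolvers/cat', args),
};

// apollo-server would parse the incoming query; here we dispatch directly.
async function graphqlGateway(operation, args) {
  return resolvers[operation](args);
}

graphqlGateway('cat', { id: '1' }).then(console.log);
```

Each of the two halves carries only its own dependencies, which is the whole point of the proposal: neither bundle has to include the other's libraries.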



Oh I agree, something like that could work! Though I think you'd get a lot of cold starts for your resolvers, additional latency (as a second request would need to be resolved before returning), and you'd make it so that your code wouldn't easily run anywhere else without major changes.


This is one of the things I love about up: you write a normal Apollo Server using Express (under the hood), and it runs locally, it runs locally behind the up proxy for testing, and it runs in the cloud exactly the same way.


Why wouldn't @basst's approach work with all the other cloud providers' lambda solutions? You would just want to have multiple handlers (aws-handler.js, now-handler.js, gc-handler.js) to transform the requests and responses per platform. Plus serverless-offline simulates Lambdas locally by simulating AWS.
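A sketch of that adapter idea: one platform-neutral core plus thin per-platform wrappers. The event shapes below are simplified stand-ins (the real AWS API Gateway event and the @now/node (req, res) signature have more fields), and the handler names simply mirror the comment.

```javascript
// Platform-agnostic core: plain request in, plain response out.
function core({ path }) {
  return { status: 200, body: `handled ${path}` };
}

// aws-handler.js — adapts a (simplified) API Gateway proxy event.
const awsHandler = async (event) => {
  const r = core({ path: event.path });
  return { statusCode: r.status, body: r.body }; // Lambda proxy shape
};

// now-handler.js — adapts the Node-style (req, res) signature.
const nowHandler = (req, res) => {
  const r = core({ path: req.url });
  res.statusCode = r.status;
  res.end(r.body);
};

awsHandler({ path: '/graphql' }).then(console.log);
// → { statusCode: 200, body: 'handled /graphql' }
```

Only the wrappers know about a platform; porting to another provider means writing one more small adapter, not touching the core.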



Ok, so you build your source code to have a graphql.js file, which then fires off individual resolvers as required (potentially many, for a single request), which are other lambda functions. (Now you can't just use the battle-tested Apollo Server; you have to roll your own.)

How do you even hope to run this locally, for development? You say you could use the Serverless Framework... True, you could. But then why the hell would you be using Now in the first place, and not just do it all through the Serverless Framework (develop, test and deploy)? Why add yet another moving part to a system which has everything it needs?


Looks like there'll be some more magic available: https://twitter.com/rauchg/status/1060604744954085376?s=19



That'll be neat... keeps it all in one solution at least. For now though, I can't see this being a usable solution for a production GraphQL API project.



I would be interested to see some performance benchmarks for this configuration across the different lambda providers. I think this could work if it's fast enough.


November 12, 2018 at 8:34am