Replies: 13 comments 32 replies
-
Snowpack supports H2! More info here: #1236. If you're able to give that a try, I'd love to hear whether H2 solves your perf problem with a full refresh. Also, can you confirm that you're running Snowpack over a local network? Even at 600 files, if each takes 10-20ms to load, I'd be surprised if the total ever exceeded a second, since the browser fetches them in parallel. Any details you can share (screenshot of the network panel, etc.) would be great to help debug! Re: true local caching: we can definitely investigate this more, but I'd first want to understand what you're currently seeing.
-
Thank you for your response and this awesome tool. I am using Snowpack on localhost. Each file takes a few ms, but they add up and the content load completes in 5-6 seconds; the last file download starts at 2.93 seconds. I pasted a few screenshots from the network panel for my dev server below. You can see I set up the certificates and the calls are HTTPS, but they are not using H2. Is there anything else I need to do to enable it? Thank you,
-
Thank you, Fred. That sounds great. Checking ETags upfront should speed up 304s significantly. I can't wait to test it. I tried a Create Snowpack App project and somehow it does work with H2 there. I'll do some more troubleshooting to determine what causes the bigger app to fall back to HTTP/1. Thank you,
-
OK, it looks like the proxy is what prevents H2 from working: proxy: { '/api': 'http://localhost:3002/api' }. I saw an issue about this, saying that when a proxy is present, H2 cannot be used and the server falls back to plain HTTPS, but I don't understand why the request can't be proxied. I'm specifying in the proxy config that the backend should use HTTP. Why can't Snowpack serve the front end over H2 and forward requests to the server over HTTP/1? This is a very common setup. What do I need to do to get H2 working with a proxy? Have the API server accept H2 as well? Thank you,
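For context, the setup described above in config form. Treat this as a sketch of the configuration being discussed, with the option names taken from the comment (secure, proxy), not as a verified working config:

```javascript
// snowpack.config.js -- the setup described above: an HTTPS dev server
// with /api/* requests forwarded to a local HTTP/1.1 backend.
module.exports = {
  devOptions: {
    secure: true, // serve the front end over HTTPS
  },
  proxy: {
    // The backend itself only speaks HTTP/1; the question is whether the
    // browser-facing side can still negotiate H2.
    "/api": "http://localhost:3002/api",
  },
};
```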
-
It seems this is tracked here: #516
Would it be possible to review the code that uses http-proxy and see what can be done about this? Thank you,
-
I checked, and it looks like there is an open issue on http-proxy for HTTP/2 support: http-party/node-http-proxy#1237. At the end of that thread, this project is mentioned: https://github.com/nxtedition/node-http2-proxy. Would it be possible to use it? Thank you,
-
I tested the hypothesis in my development environment. I was able to get an http2-proxy server working and added a quick-and-dirty ETag cache for fast 304 responses. Here's the PR for it (not to merge, just for viewing, testing, and analysis). The http2-proxy configuration is different from http-proxy's, so you will see a bunch of bad code suppressing type errors, and this would be a breaking change, but like I said, this is just for testing. It worked well enough in my development environment.

The results are not great. I do see some improvement with these changes, but not the sub-second loads I was expecting. Ultimately there is too much congestion when 600 files are loaded from the server; the load completes in 3-7 seconds. Here are some screenshots from the network tab:

I think there is no way around it. If you want to develop big apps with lots of files, Snowpack needs true local caching with content hashing. I see there is a plugin for content hashing by @akejolin (https://github.com/akejolin/snowpack-plugin-content-hash), but it only works for build, not for the dev server.

I think content hashes need to be stored in an import tree. All files would be imported with these content-hash suffixes (dynamically set in memory). When a file changes, Snowpack would read the new contents and update its content hash. Then it would also update the hashes of the file's direct ancestors in the import tree, so the browser is able to request the new file. The ancestor hash updates would not be content-based, of course, since the ancestors' contents are not changing; they would be random hashes. When the page refreshes, Snowpack would send index.js with a new content hash, and its children would keep the same hashes except along the direct line to the recently changed file. Does that make sense? Let me know what you think. I really need faster development cycles. Thank you,
-
If I can pop into this thread: I was looking into another HTTP/2 issue, trying to combine Snowpack with a backend server. I was never successful with the proxy setting in the Snowpack config. I also tried "experiments.middleware" (sorry, the spelling might not be correct) to load a backend Express app directly, but this had an issue in HTTP/2 mode because my Express app doesn't support raw HTTP/2. Finally I tried the spdy module, and it worked great for me. If Snowpack used the spdy module directly, might it work automatically with the existing proxy tool?
-
Another thing we could add is experimental HTTP/3-over-QUIC support, something like this: https://github.com/trivikr/node-http-servers/blob/master/quic.js. Lastly, another improvement we could make to the dev server is to move all served files to a streaming response.
-
I don't have much to add, but I'm glad this is being looked at. I was starting to consider whether I needed to reorganise my preferred directory structure to avoid loading so many files. I'm using CSS modules and the styles aren't hot reloading, so I've been refreshing a lot.
-
I am having the same problem after switching from webpack bundling during dev to Snowpack. The HMR with Fast Refresh is great 👍, but the initial page loads are all much, much longer, which is surprising because webpack wasn't doing any code splitting and so was including all our routes in a dev bundle of about 15MB. Edit: I am seeing 304s when I activate verbose mode on Snowpack, but I'm unsure why they don't show as 304s in the browser. Is there anything else I can do to speed up loading a large application?
-
Hi Adam. Exactly. I still suffer a bit from this problem, but aggressive code splitting helped quite a bit. It really forces you to review all the modules that get loaded upfront and check whether they can be postponed until later. After major refactoring I managed to reduce my initial requests from 600 down to around 140. In most tests I still end up loading around 300 files, which takes around 1-2 seconds, but it feels like it gets exponentially worse past that point. I ended up using a locally modified version of Snowpack so I could send cache-control headers for web_modules to stop the browser from re-requesting them, which helps a bit. I would prefer lower times, but with HMR I don't need to reload the entire page as often as before, which makes it more manageable. This is mainly an inefficiency in WebKit, which should improve over time. Until then, code splitting is the best solution we have; it improved my performance in production as well as development. If that's not enough, esbuild/Vite looks promising.
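The cache-control change described above might look roughly like this; the helper name and the exact header values are my own illustration, not the patched Snowpack code:

```javascript
// Decide the Cache-Control header per request path: vendored packages under
// /web_modules/ are treated as immutable so the browser never re-asks for
// them, while app code stays revalidated (ETag/304) on every load.
function cacheControlFor(url) {
  if (url.startsWith("/web_modules/")) {
    return "public, max-age=31536000, immutable";
  }
  return "no-cache"; // always revalidate project files
}
```

This trades correctness for speed: if a web_modules file ever does change (e.g. after reinstalling dependencies), the URL has to change too, or a hard refresh is needed.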
Thank you,
Cagdas
On Thursday, April 29, 2021, 09:53:41 AM PDT, Adam Lovatt ***@***.***> wrote:
I'm seeing similar issues to those above.
With nothing cached, a fresh load of our app from the dev server takes 14 seconds to load 300 JS requests. Caching brings the load time down to 6 seconds, even if most/all of the modules come back with a 304.
Most of the time appears to be spent by the browser loading and parsing each file so it can fire off the next requests, which makes sense.
If it is just the browser causing a bottleneck due to the number of modules being loaded, I imagine there's not much that could be done without getting into the world of batching responses, prebundling small chunks, etc, or maybe keeping a manifest that could be served up first so the browser can load dependencies without waiting for the individual modules to come in and request them. This would obviously start to interfere with Snowpack's "unbundled development" approach though.
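The manifest idea above could be sketched as modulepreload hints: if the dev server already knows the flattened dependency list, serving it up front lets the browser start all fetches in parallel instead of discovering imports one level at a time. The function name and shape are illustrative:

```javascript
// Turn a known module graph (flattened to a list of URLs) into
// <link rel="modulepreload"> tags that can be injected into index.html,
// so the browser begins fetching every module immediately.
function modulePreloadTags(moduleUrls) {
  return moduleUrls
    .map((url) => `<link rel="modulepreload" href="${url}">`)
    .join("\n");
}
```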
-
Hello,
I love the development experience with snowpack but I have a few questions/issues/ideas.
HTTP/2 does not seem to work in dev. All the files seem to load over plain HTTPS (HTTP/1.1). I feel HTTP/2 would reduce load times. Is it a misconfiguration on my part? I set secure to true and HTTPS works, but HTTP/2 does not.
Would it be possible to use the optimize plugin (minify) in dev as well? It would reduce load times. It only seems to work for build.
I love how efficient Snowpack is once the page is loaded, but since each module is a separate file, my app loads around 600 files on every page refresh during development. I don't need to refresh pages often, but it would be great to reduce page load times. I noticed that the browser asks the dev server for changes and the server responds with 304s. Even if that's a short operation for a single file, 600 files end up taking 3-4 seconds. Would it be possible to use something like content hashes in file names and take advantage of caching? I know this would not be easy, since it would require changing all imports on the fly, but it would SIGNIFICANTLY speed up initial load times after a page refresh.
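Changing imports on the fly could be as simple as appending a hash query parameter to each relative specifier. A rough regex-based sketch (a real implementation would parse the module properly, and all names here are hypothetical):

```javascript
// Append ?v=<hash> to relative import specifiers, so the browser's cache
// key changes exactly when the imported file's content hash changes.
function rewriteImports(source, hashFor) {
  return source.replace(
    /(from\s+["'])(\.{1,2}\/[^"']+)(["'])/g,
    (_, pre, specifier, post) => `${pre}${specifier}?v=${hashFor(specifier)}${post}`
  );
}
```

With this in place, unchanged modules can be served with long-lived cache headers and never hit the dev server at all.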
Thank you,
Cagdas