Thursday, May 28, 2015
Turns out gzipping a buffer and writing the compressed result to a file in node is pretty easy. Not sure why this took me so long to put together. Guess I'm still getting used to node's buffers and streams. And I didn't bump into anything that quickly showed how to gzip a buffer that's already in hand and write it to a file in node, either. Strange. Everyone uses ExpressJS, afaict.

I wanted the crap web server I'm writing in node to be able to deliver gzipped content, and thought the neatest way to do this was to check if a gzipped copy already existed for a requested file (of the right types -- html, js, and css). If not, I deliver the raw/uncompressed version initially and asynchronously fire off a request to start the compression for next time. There's also, obviously, some logic to see if the original is newer than the gzipped version, etc etc. I'll skip all that for now as I straighten it out, but will probably push to npm in a week or two.

I really am [currently] worried that all the overhead for each request (parsing file paths, getting stats on two files with protected/try blocks, and comparing modified dates if they both exist) is going to kill much of the advantage cached & compressed files provide. Should test with some significant load, I guess. But the actual gzipping isn't bad if you use the zlib module.
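Here's a minimal sketch of just that step, assuming the raw contents of the requested file are already sitting in a buffer from the read that served the uncompressed response. The cacheGzipped name and the filePath + ".gz" naming convention are illustrative placeholders, not necessarily what'll end up on npm:

    var zlib = require("zlib");
    var fs = require("fs");

    // Compress an in-memory buffer and cache the result next to the
    // original file so the next request can be served pre-gzipped.
    // (cacheGzipped and the ".gz" suffix are illustrative placeholders.)
    function cacheGzipped(filePath, buf) {
        zlib.gzip(buf, function (err, zipped) {
            if (err) {
                console.error("gzip failed for " + filePath + ": " + err);
                return;
            }

            fs.writeFile(filePath + ".gz", zipped, function (writeErr) {
                if (writeErr) {
                    console.error("couldn't write " + filePath + ".gz: " + writeErr);
                }
            });
        });
    }

Since zlib.gzip is asynchronous, the request that triggered the compression never waits on it; the .gz file just quietly shows up in time for the next hit.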
The reason I'm using a buffer is because I've already read and returned the raw version of the requested file, uncompressed, to the most recent requestor. There's no reason to make them wait until the cached copy is ready. They get the original now. But then I've got that buffer sitting around, and there's no reason to read the file twice...

(The reason I'm not using ExpressJS to serve static content is that I'm always wary about code you haven't vetted, and this didn't seem like a horrible task when I started. I'm going back and forth about writing a dependency-less-ish version of this server, and then later adding a version that allows 3rd party dependencies that could use Express (etc) instead of the hand-rolled stuff if it's installed. But there's so much overhead in Express... Find where it looks up content types, for instance. You're going to have to travel through three or four dependencies until you end up at the source. And it's not tuned for lookups, I don't believe. Not that it's a huge deal, but Express's minimalist claim? Thhhbth.

I mean, I'm sure I'll figure out I've bitten off too much reasonably soon, but right now a focused server targeting delivery of single-page apps (static + tons o' JSON) seems like a doable, smart idea. Anyhow, it didn't take long before I was serving up "routings" (where the server parses the URL to see if it maps to a registered function) and static files when no routing rule matched (if the static files existed). I'm not sure why I let myself get distracted by gzipping, other than it seems you oughta have it if you want to pretend you have a web server. /sigh)
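A rough sketch of that "routings" idea -- parse the URL, check for a registered function first, and fall back to the file system when no rule matches. The routes table, serveStatic, and the /api/time path are all placeholders for illustration, not the actual server:

    var http = require("http");
    var url = require("url");
    var fs = require("fs");

    // Hypothetical routing table: pathname -> handler function.
    var routes = {
        "/api/time": function (req, res) {
            res.writeHead(200, { "Content-Type": "application/json" });
            res.end(JSON.stringify({ now: Date.now() }));
        }
    };

    // Naive static fallback; a real server would sanitize ".." out of
    // the path and set a proper Content-Type.
    function serveStatic(pathname, res) {
        fs.readFile("." + pathname, function (err, buf) {
            if (err) {
                res.writeHead(404, { "Content-Type": "text/plain" });
                res.end("Not found");
                return;
            }
            res.writeHead(200);
            res.end(buf);
        });
    }

    http.createServer(function (req, res) {
        var pathname = url.parse(req.url).pathname;
        var handler = routes[pathname];

        if (handler) {
            handler(req, res);          // a registered routing matched
        } else {
            serveStatic(pathname, res); // otherwise try static files
        }
    }).listen(8080);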
Labels: expressjs, javascript, node | posted by ruffin at 5/28/2015 04:15:00 PM