title: Put the knife down and take a green herb, dude.
descrip: One feller's views on the state of everyday computer science & its application (and now, OTHER STUFF) who isn't rich enough to shell out for www.myfreakinfirst-andlast-name.com. Using 89% of the same design the blog had in 2001.
FOR ENTERTAINMENT PURPOSES ONLY!!!
Back up your data and, when you bike, always wear white. As an Amazon Associate, I earn from qualifying purchases. Affiliate links in green.
MarkUpDown is the best Markdown editor for professionals on Windows 10. It includes two-pane live preview, in-app uploads to imgur for image hosting, and MultiMarkdown table support. Features you won't find anywhere else include...
You've wasted more than $15 of your time looking for a great Markdown editor. Stop looking. MarkUpDown is the app you're looking for. Learn more or head over to the 'Store now!
Thursday, May 28, 2015
Turns out gzipping and writing the now-gzipped buffer to a file in node is pretty easy. Not sure why this took me so long to put together. Guess I'm still getting used to node's buffers and streams. And I didn't bump into anything that showed how to quickly gzip and write an already-in-hand buffer to a file in node either. Strange. Everyone uses ExpressJS, afaict.

I wanted the crap web server I'm writing in node to be able to deliver gzipped content, and thought the neatest way to do this was to check if a gzipped copy already existed for a requested file (of the right types -- html, js, and css). If not, I deliver the raw/uncompressed version initially and asynchronously fire off a request to start the compression for next time. There's also, obviously, some logic to see if the original is newer than the gzipped version, etc etc. I'll skip all that for now as I straighten it out, but will probably push to npm in a week or two.

I really am [currently] worried all the overhead for each request (parse file paths, get stats on two files with protected/try blocks, and compare modified dates if they both exist) is going to kill much of the advantage cached & compressed files provide. Should test with some significant load, I guess. But the actual gzipping isn't bad if you use the built-in zlib module.
The reason I'm using a buffer is because I've already read and returned the raw version of the requested file, uncompressed, to the most recent requestor. There's no reason to make them wait until the cached copy is ready. They get the original now. But then I've got that buffer sitting around, and there's no reason to read the file twice...

(The reason I'm not using ExpressJS to serve static content is that I'm always wary about code you haven't vetted, and this didn't seem like a horrible task when I started. I'm going back and forth about writing a dependency-less-ish version of this server, and then later adding a version that allows 3rd party dependencies that could use Express (etc) instead of the hand-rolled stuff if it's installed. But there's so much overhead in Express... Find where it looks up content types, for instance. You're going to have to travel through three or four dependencies until you end up at the source. And it's not tuned for lookups, I don't believe. Not that it's a huge deal, but Express's minimalist claim? Thhhbth. I mean, I'm sure I'll figure out I've bitten off too much reasonably soon, but right now a focused server targeting delivery of single-page apps (static + tons o' JSON) seems like a doable, smart idea. Anyhow, it didn't take long before I was serving up "routings" (where the server parses the URL to see if it maps to a registered function) and static files when no routing rule matched (if the static files existed). I'm not sure why I let myself get distracted by gzipping, other than it seems you oughta have it if you want to pretend you have a web server. /sigh)

Labels: expressjs, javascript, node
posted by ruffin at 5/28/2015 04:15:00 PM
While reading up on how to properly respond to
No, I'm not saying you necessarily need to code around the foibles of a now [thankfully] fairly rare version of IE. What I am saying is that this is a rookie mistake that made it out of QA at one of the best software engineering companies there is (particularly in 2002-2003), and in one of its marquee products. Anyone who has used streams has run into issues with buffers similar to this one. This is, at worst, a slight variation on a theme. Though I'm not suggesting you forgive bad code either, it is slightly comforting to know that, at least 12 years ago, Microsoft let something so [okay, potentially anachronistically] glaring slip.

posted by ruffin at 5/28/2015 09:12:00 AM
Wednesday, May 27, 2015
Got a meeting request that borked Outlook, so I started a-googlin'. Here's the easily found Microsoft support article:
How do you fix problem 1? Oh, so glad you asked! It's easy!
Or, if it's problem 2, it's even more fun! You can go read an RFC.
Wow. The worst part of this is the date of the last update to this support article.
These don't seem like particularly difficult things to kludge around. How is it that such an easily definable set of edge cases can't be kludged into Outlook 2010 at some point in the last three and a half years? Are there really that many more important issues? Sounds like this could be fixed in a good weekend of work, plus a few hours of QA. Release it as an unsupported script or translation utility, for heaven's sake. /sigh Outlook fail.

Update: ExQuilla imported the ics file, no problems.

Labels: outlook fail
posted by ruffin at 5/27/2015 04:23:00 PM
Thursday, May 21, 2015
This is from VMware's community site.

posted by ruffin at 5/21/2015 02:25:00 PM
Friday, May 15, 2015
Ah, the elusive dependency-free datepicker.
I'm starting to hook up the html side of my node app, and I'm trying to stay pretty 3rd-party-library lean. But there's no reason whatsoever to reinvent the date picker. Hope it actually understands dates, though. Dates aren't fun in any language. (Okay, well, actually, dates are lots of fun, and I've got an SO question on date diffing that I really would like to have enough time to return to. One of the answers posted after mine set up some unit tests that it ran against each existing answer, and my answer, though performing reasonably well, fails a few. My answer was about as good as Skeet's at the time (failed and passed about as many tests, iirc)! I think Skeet was grossly overcomplicating the process, but it's been a while, and I've got to go back to check when I have some real time.)

Labels: javascript
posted by ruffin at 5/15/2015 03:29:00 PM
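For what it's worth, the usual dodge for whole-day diffs in JavaScript is comparing UTC midnights, so timezone and DST shifts can't shave hours off a day. This is a sketch of that standard trick, not the SO answer discussed above:

```javascript
// Whole-day date diff via UTC midnights. Local-time subtraction can be off
// by an hour across DST boundaries; normalizing to UTC dates avoids that.
function daysBetween(a, b) {
    var MS_PER_DAY = 24 * 60 * 60 * 1000;
    var utcA = Date.UTC(a.getFullYear(), a.getMonth(), a.getDate());
    var utcB = Date.UTC(b.getFullYear(), b.getMonth(), b.getDate());
    return Math.round((utcB - utcA) / MS_PER_DAY);
}
```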
So though it's completely unfair to characterize a technology stack based on the poor implementations built on top of it by 3rd parties, I completely get what this guy is saying:
He's comparing that to using packages on Node. Here's his Node sum:
I just started fishing around in

The nice part about Node is that it's very Linux-on-the-desktop-y, in that simply being a Linux user means you're willing to accept many things your standard workstation user would not. Node usage presupposes a few things whose importance we might underestimate:

* Users are familiar with the command line.
* They know JavaScript well.
* They don't mind basing their livelihood on an open source library.
* Every node app, at least the node part, is headless/UI-less.

That's a pretty select group of folks, and one I'd rather work with than the guys who need point-and-click admin interfaces (not that everyone who uses MS does, but those who do need the hand-holding are largely welcome there [1]) who think SSRS is the way to create web interfaces for their reports (and sympathy to anyone whose job forces them to use SSRS. You know, node and MS SQL really aren't that bad together...).

[1] I had MS SQL Server training years ago, and wow. The guy I was working with did almost everything from the command line, it seemed, or at least could, and would if it was easier than the GUIs, so that's how I was learning to do it too. But man, there were tons of people in the course who only knew how to run a SELECT by right-clicking a table in what's now SQL Management Studio and selecting the SELECT options from the context-menus there. Last month, I took a training course on administrating VMware's vCOps/vROps. Same deal. Though you could use PowerCLI to pull out all of these metrics and then pretty much push them wherever you wanted, the course was all about how to left- and right-click your way through wizards with exceptionally klunky UIs to make management-friendly "dashboards" whose metrics those admins might or might not actually understand.

Labels: node
posted by ruffin at 5/15/2015 08:41:00 AM
Wednesday, May 13, 2015
Data on SSDs powered off die quickly, summed by Michael Tsai. Key in on the /. post's use of "unverified" in "unverified backups", which implies (explicitly says?) you have to tend your backups whether they're of old data or not. Quoting from MacWorld:
Quoting from a Slashdot user, no less (long time no see, /.):
Hadn't really given data's lifespan on SSD much thought. Kinda makes you miss EPROM rot, which seems glacial by comparison. I guess I should expand my adage about digital data from, "If you don't have it in three places, you don't have it at all," to, "If you haven't verified you have it in three places in the last three weeks..."

Also means that the way I kinda store old laptops as "backups" needs to change, and quickly. I haven't retired an SSD-powered computer yet, but it looks like it'll be a more complicated process when I do. The weird thing about "live" digital data (where "live" means ...). I wonder how degraded my home movie VHS tapes are... YouTube posts of old VHSs aside, our magnetic (and now even more so with solid state) legacy might be much more ephemeral than film and paper have trained us to expect.

posted by ruffin at 5/13/2015 11:43:00 AM
Monday, May 11, 2015
I inherited some code as part of a Rube Goldbergian file-to-db process, and it has a step that uses JavaScript on Windows to parse csv files. Instructions said to use WScript to run it [sic]. Boy, that was fun. When I decided to push all the debugging into a central function that also output the info to ... Quick SO answer to the rescue.

Quick sum: Run the script with ...

Side note: You can't use ...

Labels: javascript, noteToSelf, windows
posted by ruffin at 5/11/2015 10:52:00 AM
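For anyone landing here with the same problem: the standard fix for this class of Windows Script Host headache (and my assumption about what that SO answer says) is to run the script under the console host, cscript, instead of wscript, so WScript.Echo output goes to stdout instead of popping a dialog box per call:

```shell
:: Windows command prompt; the script name is a placeholder.
:: //nologo suppresses the Microsoft banner on every run.
cscript //nologo parse-csv.js
```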
Saturday, May 09, 2015
Apparently The Right Way includes adding to an exclude file, but if you just want to "assume unchanged" forever... If you need to ignore local changes to tracked files (we have that with local modifications to config files), use git update-index --assume-unchanged [file].

Labels: git, noteToSelf
posted by ruffin at 5/09/2015 02:09:00 PM
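A quick demo in a throwaway repo (file names are placeholders), including the flip side you'll eventually need when you do want to commit a change to that file:

```shell
# Demo: mark a tracked file "assume unchanged" so local edits stop
# showing up in `git status`.
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q .
git config user.email you@example.com && git config user.name demo
echo "original" > local.settings
git add local.settings && git commit -qm "track config"

git update-index --assume-unchanged local.settings
echo "my local tweak" > local.settings
git status --porcelain    # prints nothing: the edit is hidden

# Undo it when you actually want to commit a change:
git update-index --no-assume-unchanged local.settings
git status --porcelain    # now shows the modification again
```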
Note to self: Look inside your new album's cover. There's a CD in there. It'd be nice to find that before you hook your turntable up to your MacBook, import and split into tracks with Audacity, and convert to mp3. This worked not only with the new Halestorm album I picked up last weekend, but also the Rich Robinson album I picked up months ago, but hadn't gotten around to importing. Glad I thought to look. After I'd already ripped both Halestorm records and noticed the CD when I put them back. /sigh

Labels: formats, music, noteToSelf
posted by ruffin at 5/09/2015 01:39:00 PM
Friday, May 08, 2015
Edit: Great wrap up of Redacted by Michael Tsai (who is without peer at link sniping these things). And some quotes there are as sharp or sharper than mine:
So as any good, RSS-equipped indie dev wannabe knows, Sam Soffes blogged about the release of Redacted yesterday. I've watched the video, I've seen the website. I agree with you. It wasn't a great launch, and it isn't a particularly great app. I can pixelate all I want with Skitch for free. And making black boxes on top of images? RLY? How long have we been able to Windows-R, mspaint? Heck, you can make white ones in Preview by selecting and cutting. I bet Soffes wouldn't argue any of that. He certainly doesn't argue that it was a bum launch.
Dan Counsell would not be proud. I'm all for supporting the village toymaker, but Redacted isn't exactly hitting a pain point for me, so I doubt it's aspirin for other OS X users that heard about the app either.

It's the results, stupid

But our take home today isn't that the launch wasn't great or the toy isn't particularly novel. It's that this cruddy launch of a cruddy app DOMINATED THE APP STORE!!1!
Let's let that do the proverbial sinking in. This barely tweeted app was #8 overall paid in the US Mac App Store. That's great! Wow, look what a blog and twitter account bags you, right? What a blog and a twitter account (and #8 top paid on US Mac App Store) bags you...
Wow. #8 in the US == 59 sales, at least on May 7th, 2015. And that bagged him $302. If he stayed #8 all year he makes $110k. (Hint: He's not going to stay #8 all year. That said, by following the new, "I'll share my numbers with you indies, and then you'll buy my app," marketing plan, he's #2 currently. Sounds like he's getting $1000 today. See what a little marketing does for you? Dan's happier now. Not happy, but happier. ;^D) Who is Sam, and what's the lesson for indies?
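The back-of-envelope math, for the skimmers (the $302/day figure is from Soffes' post; extrapolating it across a whole year is, obviously, the unrealistic part):

```javascript
// Daily revenue at #8 Top Paid, naively extrapolated to a year.
var dailyRevenue = 302;              // reported for May 7th, 2015
var perYear = dailyRevenue * 365;    // assumes the rate holds all 365 days
console.log(perYear);                // 110230 -- the "roughly $110k" above
```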
I like the way he quietly slid his move back to full-time employment in there, but I'm not sure that's the right reaction. This is a guy who, about a year ago, made his goal to retire a year from now, at 25. Probably not going to do that at the 9-to-5. This is also a guy who, just over seven months ago, said:
That's just sad, man. And it's pretty common, I think. I follow a recently pretty quiet blog from White Peak Software. The guy hit it fairly well at least twice, once on the Mac before iOS, and once during the iOS gold rush. And he's been working as an indie (plus contracting) for 10 years. But he talks about a goal of making $700-1500 a month on software alone at one point, and apparently isn't hitting that goal. That stinks.

I'll probably blog more about this later, but the lesson seems clear: Go big or go home. Target business users somewhere, somehow in your business plan. News flash, niche seekers: You can't help but shoot for a niche, but, oxymoronically, shoot for a big one. Don't go out of your way making your ideas more niche. Simply find one. And don't expect any single marketing trick (perhaps excepting the aforementioned, "Here are my numbers; buy my app" trick) to push you into the black/green/what-have-you. And don't expect to retire at 25 if you're the village toymaker. It could happen. It shouldn't be your goal.

Let me be clear: I really appreciate folks blogging about what's essentially failure. Soffes hasn't failed yet, but he was on his way if he just left Redacted on the Mac App Store pile without any more marketing. You could read what I'm doing here as a pretty hard slam. It isn't, though it's not slam-free. Redacted has to be a side project for him. That is, I don't get the feeling this was the equivalent of Sinclair working on Unread:

I began work on Unread at the beginning of July 2013. I spent about six weeks on the overall design of the app, then plunged headfirst into Xcode, not coming up again for air until the following spring. I estimate that I worked sixty to eighty hours a week every week from July 2013 up until the launch of Unread for iPhone Version 1.0 in February 2014.

Getting several thousand dollars from a for-fun side project ain't failure, man. It doesn't mean go back to Cubeland.
Though getting $300 for being #8 Top Paid in the US might be. Reminds me a little of Charles Perry's fable of "Pareto Distributions and the Long Tail":

Luckily, there's a lot of money to be made in that long tail. At the top of the long tail, in position 871 on the U.S. Top Grossing list, an app still makes over $700 in revenue per day. That's almost $260,000 per year. Even number 1,908 on the U.S. Top Grossing list makes over $100,000 per year.

I sent Perry an email after that post, and he was exceptionally kind with his time and replied, in detail. He's, imo, stuck on seeing the glass half full. Fair enough. But after hearing a number of people say targeting the Mac, where people will "still pay real prices" (I think Daniel Jalkut, among others, has said something to that effect, though that's where his bread is buttered already), I'm not sure if the tail on OS X's store (not including straight sales, natch) is as long, and I wonder if the iOS App Store's is as fat as we thought. I can agree strongly with Charles on one thing, though, that I believe he said on Release Notes: We need to grow the pie.

Soffes' big project was supposed to be Whiskey, a Markdown editor. I'm using a great, free Markdown editor right now -- MacDown. I don't see a single must-have, Markdown-editor-evolving feature on Whiskey's feature list. He's got to get better ideas, man, or he's got to gain a larger following. Note that I didn't say more original ideas. Just plain old, solid, "invest in gold" kinds of ideas. And going back to work full-time is a sort of failure within the goals Soffes expressed for himself. Though there's still hope: Soffes tends to fall in and out of full-time work. It'll be interesting to see what happens in a year to 18 months.

Labels: app store, app store econ, apple, business, indie
posted by ruffin at 5/08/2015 09:54:00 AM
Thursday, May 07, 2015
So I'm unhappily debugging some inherited Python code, a language I've never used before, that's causing trouble (it's comparing .csvs, and was using giant dictionaries to keep all the rows from the first in memory, removing keys when a matching "id" column was found in the other file. But did it anticipate repeated keys in multiple csv rows? No. No, it apparently didn't), and got the attached "results" from googling for "python try catch". Hello, surreal... And, as you can tell from the second picture, if you choose to play, after the search results are replaced by a command-line environment, things get a little Hideo Kojima-y.

I remember once playing the original Metal Gear Solid with a buddy and running into Revolver Ocelot (?) in a boss battle, and he said something about how long it'd been since we'd saved, and that we weren't allowed to save now. I'm not even sure he was telling the truth, but we didn't try to save, and the battle was that much more exciting because of it. So "This invitation will expire if you close this page," certainly convinced me into going down the rabbit hole. I mean, even if this is some sort of strange, elaborate hack, who wouldn't? I'm your huckleberry.

Turns out there's a file in my "home" directory called ...

The worst thing? I completely foobared the Google/foobar. What a freakin' idiot. I did sort of figure it was an application for employment, and I couldn't've done worse. The first problem wasn't tough, but I did my usual "race through and figure out errors later" and obo'd all over the place, which was obviously the point of the problem. I got it, but it took about 17 minutes of setup and fixing dumb errors. Idiot. I'm sure they see every failed ...

Oh well. I logged in to save and guess I'll try the remaining tasks later. Not that I'm even looking for a job. Not that it's particularly ethical to interview me for a job while I'm working in my current position without telling me that's what we're doing. But that was fun, in a way.
This is what I get for searching to fix a Python issue! Programming languages are all "different dialects of the same language", a prof once told me. He was right, but it's strange what googling those dialects can do to you. Anyhow, more about what's going on with Google/foobar here. And yes, it turns out it was an interview. Dang it. Still, going from debugging Python in Vim via ssh to javac-ing an answer in 17 minutes isn't horrible, I guess. ;^)

posted by ruffin at 5/07/2015 09:59:00 AM
Wednesday, May 06, 2015
I've got a post on how to replace ^M inline, but here's another option for Vim v7.2.40+, as detailed in vim.wikia.com:

To save to DOS format, just skip the ...

I hadn't seen the ...

The "magic" comes from this, which I wouldn't've guessed:
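For reference, the usual shape of the vim.wikia fileformat recipe, reconstructed from memory -- treat the exact commands as an assumption and check the wiki page:

```vim
" Reread the file forcing DOS line endings, so the ^M characters are
" interpreted as line terminators instead of literal text:
:e ++ff=dos
" Then mark the buffer as Unix format and write it, converting every
" line ending in one go:
:setlocal ff=unix
:w
```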
Labels: noteToSelf, vim
posted by ruffin at 5/06/2015 09:36:00 AM
Tuesday, May 05, 2015
I get tired of reading things like this post about memory:
"A little bit" gets used? Look, things go bad well before you hit zero in Available. I don't know if it's coincidence, but if I've got a box that's getting sluggish, I'm always near zero in Free, not Available. In spite of the fact that I can't find a good source describing what's going on, I really don't think it's coincidence.

I understand that Available is supposed to be RAM that's ready for use, but if it's not needed, is supposed to be able to be overwritten quickly. But it's still "on deck", and it's still a limited resource. Let's say you've got zero Free. That means you've only got Available, stuff that likely will be used but could be overwritten if needed. My understanding is that the OS often tries to guess what you'll need and pre-loads that into Available. Maybe that's right, but it doesn't really matter. Either way, Available is stuff we think it's likely we'll need, but could overwrite if needed. And if we have zero Free, we can't add anything else to Available.

But if what's "really" needed is something the OS wrote to disk a while back and the OS didn't "guess correctly", what happens? We read from disk, and push that jive into Available. Reading from disk is a slow process. If we had enough Available to hold all the possible guesses for all of our apps (I know, that's nearly "unlimited Free"; humor me, but it's also my point), forced, time-sensitive reads from disk to push into Available wouldn't happen. We'd have no platter lag. But the more apps we have sharing Available, the less of a chance the OS has guessed correctly. Each incorrect guess (I'm guessing) costs us performance. If we have Free RAM open, well, that tells us the OS doesn't think it has an extra guess worth taking. In other words, in that situation, Available is "exhaustive". Guessing is at its lowest cost when gobs of Free is available. I don't care if there's a 1% chance you're going to use it; if there's Free RAM, load that possible usage from disk to Free, making it Available, as soon as you get a chance.

So if Free is 0, you're less likely to have what you'll need next in Available, right? And if you don't have what you need in Available, you're going to read from disk, and that's slow. It's like playing Pick 'Em in the lottery. If there are 100 possible numbers and you have 100 tickets, unless you're an idiot, you win. If you only have 10 tickets, well, now each ticket is much more important. And if a number you didn't guess comes up, it's going to take some time to erase your original guess. ;^)

Even if I'm totally off base, you're not going to be able to convince me Free isn't crucially important. Every time my box goes sluggish [1], I open Task Manager and I'm out of Free. If I have Task Manager open and things are running well (and ...). But don't tell me, "Available is the only one that matters," like "logicearth" does here. That's just patently false. I want my RAM to be Free, man, and you should too.

[1] Sluggishness has only been at work, counterintuitively. I have lots more RAM in my boxes at home, and OSes that seem smarter about its management (OS X 10.9+ and Win 8.1). And sluggishness has been "at work" at two companies in a row now. Look, folk, you pay me too much to have me waiting on my box for a minute at a time when 16 gigs of RAM is $200. I shouldn't have to requisition it. Max out the box, and throw in an SSD. Smartest $400 you'll spend.

posted by ruffin at 5/05/2015 10:33:00 AM