
title:
Put the knife down and take a green herb, dude.


descrip:

One feller's views on the state of everyday computer science & its application (and now, OTHER STUFF), from a guy who isn't rich enough to shell out for www.myfreakinfirst-andlast-name.com

Using 89% of the same design the blog had in 2001.

FOR ENTERTAINMENT PURPOSES ONLY!!!
Back-up your data and, when you bike, always wear white.

As an Amazon Associate, I earn from qualifying purchases. Affiliate links in green.


Tuesday, March 31, 2015

NOTE: Some of the links aren't the full URL in text, but they are in href for the link tag. Shortened so they wouldn't extend the blog template.

Note to self on your recent MVC recommendations: Don't forget caching/data persistence. I think caching's the only thing the conventional library-crazy MVC overhead[1] gets you "for free" that's worth saving, but it is worth saving. Our cache was used an absolute ton in the big-ish system I hacked on in the previous job.
Need to look at something like these...
http://stackoverflow.com/a/349111/1028230 (is missing locking on write/null cache)
http://stackoverflow.com/a/21269643/1028230 (has locking)
http://deanhume.com/home/blogpost/memcached-for-c----a-walkthrough/62
http://www.25hoursaday.com/...ASPNETCachingVsMemcached....aspx
(hello, old and uncomplicated. Note that almost all of these have magic strings of some sort, which is Wrong.)
http://www.javalobby.org/java/forums/t48846.html
http://dotnetslackers.com/.../...Comparison-of-Entity-Framework-and-NHibernate-2nd-Level-Caching.aspx
http://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch20.html#performance-fetching (man, that's tuned towards large, complex, multi-associative (ha) entities)
... but within the context of this...
http://mvcbloggers.blogspot.com/2013/05/redis-vs-memcached.html
Guessing this probably ultimately wins:
https://github.com/StackExchange/StackExchange.Redis
Also interesting...
http://hawkins.io/2012/07/advanced_caching_part_1-caching_strategies/
https://signalvnoise.com/posts/3113-how-key-based-cache-expiration-works
... though with the second I was left wondering something similar to what this guy was...
http://stackoverflow.com/questions/10774738/key-based-cache-expiration
[1] That is, the benefits of a "typical MVC stack" like I was using previously: SQL Server to NHibernate with MemCache, Repository model to access NHibernate, QueryOver, and your typical MVC/MVVM setup on the other side. I want to kill off NHibernate, QueryOver, and use of the Repository model. The only baby in the bath is caching. Though, wow, caching is easier when you have a 1-to-1 and onto relationship between Views and queries. I'm not saying that makes things smarter, but it does reduce the caching complexity.
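
Since the first link above is missing locking, here's the shape I'd start from: a cache-aside `getOrAdd`, sketched in JavaScript (names and the TTL are invented). Caching whatever the fetcher returns, promise included, means concurrent readers of a cold key share one in-flight fetch instead of stampeding the database, which is the single-process analog of the lock. The key-builder object is one way around the magic strings complained about above.

```javascript
// Cache-aside sketch: check the cache, fall through to the fetcher on
// a miss, remember the result with an expiry.
const cache = new Map();

function getOrAdd(key, ttlMs, fetcher) {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value;
  const value = fetcher(); // may be a Promise; we cache it as-is
  cache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}

// Key builders instead of magic strings scattered through the code.
const keys = {
  user: id => "user:" + id,
};
```

Usage would look like `getOrAdd(keys.user(42), 60000, () => fetchUserFromDb(42))`, with `fetchUserFromDb` being whatever your data tier exposes (a made-up name here).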



posted by ruffin at 3/31/2015 05:41:00 PM
Monday, March 30, 2015

I've been considering using AutoMapper in what should initially be a pretty simple MVC project that I may have to spin up in the next month or two, as I've got some experience using Automapper in the project I mentioned a post or two ago. I found a pretty good refresher in this CodeProject post, but was a little surprised to find this observation from its author:

I ran my tests many times and one of the possible outputs could be:

AutoMapper: 2117
Manual Mapping: 293

It looks like manual mapping is 7 times faster than automatic. But hey, it took 2 secs to map hundred thousands of customers.

Wow, HEH-lo. Not a big deal for simple pages, but probably not something you want underpinning the architecture of an app that could grow (which is to say "any app that calls for MVC").

A little googling showed atrocious Automapper performance isn't uncommon, not at all. Interesting quote from the accepted answer to the first question:

Also you have mentioned NHibernate in your question. Make sure that your source object along with its collections is eagerly loaded from the database before passing it to the mapping layer or you cannot blame AutoMapper for being slow because when it tries to map one of the collections of your source object it hits the database because NHibernate didn't fetch this collection.

In other words, as this article entitled "Stop using AutoMapper in your Data Access Code" explains...

Whilst I am a big fan of AutoMapper and use it in most projects I work on, especially for Domain to ViewModel mapping, when in comes to data access code, AutoMapper is not so useful. To put it simply, AutoMapper only works with in memory data, not the IQueryable interface which is more typically used in DAL scenarios. In the data access layer, whether we are using Entity Framework, LINQ to SQL or NHibernate, we often use the IQueryable interface, specifiying [sic] what we want to query before the OR/M engine translates this into SQL and returns our data. If you use AutoMapper in your DAL however, you are almost certainly returning more data than you need from the database, as the mapping will not occur until AFTER the original query has executed and the data has been returned. [emphasis mine -mfn]

Obviously there are ways around this, namely making sure that the query that pulls your data only returns what you want for that specific data load, but then you're right back to my complaints about using a repository in the first place. Once you're hand-rolling optimizations, you've left the realm of reusable generic code. Stop trying to backport a square peg into a round hole.
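
The projection point is easy to see in miniature. A sketch (JavaScript, with an in-memory array standing in for the table; in a real DAL the projection would be the SQL SELECT list itself, so the extra columns never leave the database):

```javascript
// Fake "table" -- imagine largeBlob is a column you never want on a
// summary page.
const customers = [
  { id: 1, name: "Ada", email: "ada@example.com", largeBlob: "...500KB..." },
  { id: 2, name: "Bob", email: "bob@example.com", largeBlob: "...500KB..." },
];

// AutoMapper-style: the full entities (largeBlob included) were already
// materialized by the time the in-memory mapping happens.
const mappedAfterFetch = customers.map(c => ({ id: c.id, name: c.name }));

// Projection-style: only the requested fields ever make it into the
// result. (In SQL terms: SELECT id, name FROM customers.)
function project(table, fields) {
  return table.map(row => {
    const dto = {};
    for (const f of fields) dto[f] = row[f];
    return dto;
  });
}
const projected = project(customers, ["id", "name"]);
```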

DevTrends links to a post by Bogard that says:

For a lot of read-only scenarios, loading up a tracked, persistent entity is a bit of a waste. And unless you're doing CQRS with read-specific tables [tables meaning "Queries, views, tables, all SQL-specific" - Bogard in comments], you're doing projection somehow from the write tables.

But many LINQ query providers help with this by parsing expression trees to craft specific SQL queries projecting straight down at the SQL layer. Additionally, projecting in to these DTOs skips loading persistent, tracked entities into memory. Unfortunately, we're then forced to write our boring LHS-RHS code when we drop to this layer...

Exactly. Though I'm not absolutely sure CQRS requires different sets of tables to gain the initial important architectural improvements I'm arguing for here.

No, I didn't know what CQRS was off-hand myself. It's apparently Command Query Responsibility Segregation. It's nice to see Martin Fowler essentially arguing the point from my previous post on ending conventional repository use for reads:

The rationale is that for many problems, particularly in more complicated domains, having the same conceptual [CRUD] model for commands and queries leads to a more complex model that does neither well.
...
The other main benefit is in handling high performance applications. CQRS allows you to separate the load from reads and writes allowing you to scale each independently. If your application sees a big disparity between reads and writes this is very handy. Even without that, you can apply different optimization strategies to the two sides. An example of this is using different database access techniques for read and update. [emphasis added -mfn]

That said, Fowler's not quite so contra-CRUD as I am, and seems to believe there are many real-world use cases for CRUD. "So while CQRS is a pattern I'd certainly want in my toolbox, I wouldn't keep it at the top." Really? Writing a lot of APIs maybe?

I just don't see using CRUD as the best, scalable route to build even a typical MVC app.

Though Fowler seems less CRUD-y than I am too in that he quickly jumps to divorce reads from your database of record by putting them into reporting databases instead, which seems like overkill if you're doing that from the start. That is, I think Fowler sees CQRS as a second step you take if CRUD lets you down. I think you should use CRUD from the start.

Just to be clear, I'm using CRUD with a bold "R" to indicate a conventional CRUD system, and CRUD (with a struck-through "R") for what I'm proposing everyone do from the start when making an MVC app where reads are all-but-always done with custom SQL and, in the context of Automapper inefficiencies, hand-rolled mappings to DTOs.
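
In code, the split looks something like this sketch (JavaScript for brevity; `db` is a fake in-memory stand-in for the data tier, and the view, sproc, and column names are all invented):

```javascript
const db = {
  query: sql => [{ id: 1, customer_name: "Acme Co.", total: 99.5 }],
  execute: (sql, args) => true,
};

// Read side (the struck-through "R"): custom SQL against a view,
// hand-rolled mapping straight to a DTO. No entities, no tracking.
function getOrderSummaries() {
  return db
    .query("SELECT id, customer_name, total FROM vw_OrderSummary")
    .map(r => ({ id: r.id, customer: r.customer_name, total: r.total }));
}

// Write side (the "CUD"): validation sits between user input and the
// data tier, then a sproc does the actual work.
function createOrder(order) {
  const errors = [];
  if (!(order.total > 0)) errors.push("Order total must be positive.");
  if (errors.length) return { ok: false, errors };
  db.execute("EXEC dbo.Order_Insert @customer, @total", order);
  return { ok: true };
}
```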

There's also an implicit argument in Fowler that the write database would have a different model than the reporting dbms. I don't know that the extra overhead of two domains, one for write and one for read, is going to be worthwhile. I can understand the reporting server being a sort of "permanent temp table with periodic (and ad hoc) updates" setup, but you've still got to base it on the data that's on your write side.

That is, I don't see how you break out of CRUD and entities, though, again, I want that entity business logic on the database first. If you optimize reads -- through, I propose, views and sprocs and maybe/probably temp tables, or, as Fowler seems to assume, some export process to a reporting database; it doesn't matter -- fine, but you're still basing that information on the content of your "CRUD" database setup.

Fowler's "hyper CQRS" with a reporting database is interesting, but, to me, moving to one or more reporting databases is a DevOps issue that's possible to insert well down the line, once you know reads are so out of proportion to writes that you need the support of another/distributed database servers -- a much easier move to accomplish in the future than ripping out an architecture based on Repository and automapping models. That is, you don't have to decide to use a reporting server when you hit File >>> New Project. You do need to decide not to get wrapped up with repositories and automapping.

Maybe we're saying similar things, just with Fowler giving more emphasis to the work as a unit rather than to the many entities being affected at once. Just that, in my limited experience, optimizing writes (outside of batch-like use cases, but users seem conditioned to accept that they kick off batches and get notified when they're done) is rarely your primary performance bottleneck. Reads? That's what hobbles your systems seemingly anytime you're big enough that you're making money.

Getting back to automapping... The sum from the DevTrends post, above, is pretty good.

When hitting a database, as a developer, it is important to only return the data that you need and no more. When using modern ORM's, this is typically achieved by using projections when writing IQueryable queries. Typing this type of projection code can be tiresome and some people use AutoMapper instead, believing that it achieves the same thing. Unfortunately, AutoMapper knows nothing about IQueryable and only works with in-memory data, making it less than ideal in DAL scenarios. In order for AutoMapper to do its mapping, it needs to retreive all source data from the database, resulting in much more data being returned than is necessary, reducing performance and increasing database load and network traffic.

If your app has the potential to grow -- and let's just stipulate that any MVC app does -- you want to keep an eye on performance. And the more of this overhead you integrate into your architecture -- repositories, automapping in your data access layer -- the more tech debt you're going to have once that growth happens.

Anyhow, the tasks I was waiting on are done, so enough architecture discussion. Code time.

Bottom line: KISS now or you'll have debt to pay later.



posted by ruffin at 3/30/2015 11:12:00 AM

Apparently [a quote by Tony Hoare](http://en.wikiquote.org/wiki/C._A._R._Hoare), the inventor of quicksort, speaking at the "1980 Turing Award Lecture":

"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult."

Caught that on the NVU.org website. Can't say I'm a huge fan of Kompozer, and I'm hoping NVu eats less memory than having a full Seamonkey open just for the editor. The quote's a nice find.

(In other news, I can't believe NVu 1.0 is nearly ten years old and we still don't have a great replacement that tends to follow Hoare's "first method". A good Markdown editor might be as close as we've gotten.)



posted by ruffin at 3/30/2015 08:58:00 AM
Friday, March 27, 2015

I think I'm finally putting my foot down on the side against the repository pattern in MVC. I spent two years working on a reasonably complex MVC app that sold out almost completely to repositories, yet there were very few pages that didn't eventually require a DTO and, in the poorest performing pages (which eventually were nearly all of them), a custom QueryOver "query" to populate them.

Once you're using DTOs and not entities to populate your Views, and especially when you're using, in my past case, QueryOver (which means you *are* creating custom SQL, just one strange head rethread away from creating the SQL directly), you lose too much of the repository pattern's benefits. That is, once you turn the DTO corner, you're already pushing out a custom, usually one-off-ish solution. Why not simply write the right SQL and map from its results directly to your DTO, skipping the insane overhead and obfuscated performance bottlenecks that are conventional entity mappers, especially when they're building implicit relationships between entities? Oh, the memory of and *for* an entity JOIN gone awry. ORM relationships are, if not difficult, *insanely* tedious, and are essentially never going to be as good as even your server's standard execution plan for the SQL equivalent. SQL's not difficult. If a coder can't understand sets, you're in BIG trouble waaaay before you get to the question of what pattern to use. Why move that logic anywhere but the server? And if you're going to take the time to hand roll relationships, why ever do *any* of that in entities, not SQL? It's like Dr. No trying to perform surgery.

There's only one time that entities are particularly important in the middle tier, though that "one time" crops up with *any* save: Validation, aka "business logic". You want to keep your validation process DRY, and that means putting a nice, reusable validation step between your users' newly inputted data and your data tier is absolutely crucial. You certainly don't want to support more than one filter for data going into the same place. That'll kill you when you're debugging, and cause bad data to seep in when you're not looking. And no matter how complicated your user interface and its initial SELECTs, you're always going to reduce what's entered or edited back to your table (or whatever other serialization method you've got) structure. That's just a truism. (Unless you're a crazy NoSQL-er. I get it, though if your data model changes considerably over time, well, godspeed, man.)

But would it be wrong to put validation logic only in the database too? It'd be a pain in the rear to have a set of "CUD" [sic -- CRUD without the Read] sprocs that return validation codes [that you'd look up and turn into human readable errors in your middleware] and related info, but you need to have some *complete* sort of validation in your dbms any way you cut it. Why get unDRY there? Too many people stop with validation in the middleware and trick themselves into thinking they're DRY. They're not only **not** DRY, their database is naked too. (Insert picture of Walter delivering the ringer... You didn't think I was rolling out of here naked, did you?) As I was told years ago, "Put the logic in the database." I don't think I'm [just] showing my age there.
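
If the sprocs return codes, the middleware's whole job shrinks to translation. A hypothetical sketch (the codes and messages here are invented for illustration):

```javascript
// Hypothetical lookup table: the "CUD" sprocs return numeric
// validation codes; the middle tier only maps them to human-readable
// errors for the end user.
const VALIDATION_MESSAGES = {
  1001: "Email address is already registered.",
  1002: "Order total must be greater than zero.",
};

function toErrors(codes) {
  return codes.map(
    c => VALIDATION_MESSAGES[c] || "Unknown validation error (" + c + ")"
  );
}
```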

But at this point, you're very nearly back to the same ADO.NET pattern we started with years ago once you hit your Controllers, which worries me that I'm missing an obvious benefit.

But, honestly, for SELECT, repos are dead weight. Even ORM is dead weight. Good SELECT SQL, packaged into a well-protected sproc, mapped to a DTO seems like it's *always* The Right Thing to do. There's almost never a time when you're dealing with straight, unadulterated entities in complex pages, and even when you are, it's not like repos give you a huge leg up over smart Service reuse in your Controllers. That is, there's nothing particular to repositories that's a sole-source dealmaking advantage.

CUD is a slightly different animal, but for me the argument for entities here is more about convenience than Doing Things Right: with validation living only in your dbms, running unit tests and writing great validation logic that's easy to send back to your end users is more difficult for Average Joe Programmer than, say, hacking it in C#. I'm not sure I'm a big fan of ORMs and entities for CUD either, though, again, here I'm willing to be convinced I'm wrong.

Edit: See my discussion a little more recently on Command Query Responsibility Segregation, something I didn't know I was "rediscovering" here. The difference is I think you have to start with separation of your reads, something I kinda jokingly call "CRUD" in that post.



posted by ruffin at 3/27/2015 06:36:00 PM
Thursday, March 19, 2015

I always wish Humble Bundle would tell you more about the books they offer in their bundles without all the clicking. It's barely worth the time it takes to look up which of the books I might like -- at that point, I might as well just spend the cash on something I know I'd like. Why they don't get the info (descriptions and prices) off of Amazon or elsewhere when they put up a new bundle, I have no idea.

So I decided I'd do it this time for the current SciFi bundle. Here we go. No obvious gems (like the time an unpublished Frank Herbert novel was included in the "pay more than the average" bundle extras. Wow. Yes, please), but these are the books you get for a dollar -- er, five dollars (you also get a science fiction magazine, Lightspeed, issue 28):

The Rock by Bob Mayer -- $4.99, 4.2 stars (91 customer reviews)

In Australia...a U.S. Air Force computer operator receives a terrifying radio transmission out of Ayers Rock that knocks out the world's communications. In New Mexico...a boozy college engineering professor is suddenly escorted into a waiting car by two armed military men. In New York...a tall, slender woman leaves her husband a note on the refrigerator saying that she's leaving -- but can't tell him where she's going. In Colombia...a Special Forces officer breaks into a drug kingpin's bedroom and puts a bullet into a woman's brain before running to a waiting helicopter. And in England...a beautiful twenty-three-year-old mathematician prepares for a journey that will change her life forever.

The team assembles in Australia on a mission that can save the world. But first, they must figure out who is the message coming from? What are they trying to tell mankind? And can it be deciphered in time before Armageddon overtakes the world?

It's about a bomb, it's about a world about to be shattered. It's about the last few days they have to save us all.

Bob Mayer is a West Point Graduate and a former Green Beret.

Fiction River: Time Streams (apparently a magazine) -- $6.99, 4.8 stars (5 customer reviews)

Time-travel stories open the entire world and all of time to writers' imaginations. The fifteen writers in this third original anthology in the Fiction River line explore everything from Chicago gangsters to Japanese tsunamis, and travel from 2013 to the nineteenth century to a vast future. Featuring work from award winners to bestsellers to a few newcomers whose time will come, Time Streams turns the time-travel genre on its head. "Fiction River is off to an auspicious start. It's a worthy heir to the original anthology series of the 60s and 70s. ... It's certainly the top anthology of the year to date." --Amazing Stories on Fiction River: Unnatural Worlds

Parallelogram Book 1: Into the Parallel by Robin Brande -- $4.99, 4.6 stars (5 customer reviews)

Audie Masters is a 17-year-old girl who doesn't have it all: not a lot of money, not much confidence around guys, she can't seem to pass algebra, and she's desperate to get into the college she's been dreaming of for years.

But what Audie does have is a knack for physics that's about to catapult her into the adventure of a lifetime.

Now all she has to do is survive it.

Time's Mistress by Steven Savile (apparently a short story collection) -- $4.99, 5.0 stars (3 customer reviews)

Steven Savile's popular fantasy stories embrace all aspects of the fantastic. Be it the wonder of magical realism, the darkness of the macabre, or the mythological, these stories have one thing in common: faith. Savile offers up tales of hope and wonder in equal measure, whilst treating sadness as a long lost friend. Nothing in his world is quite as it seems. The world you think you know isn't the world you're about to enter. Everything you think you've learned about life is about to be unlearned. These are stories of love. These are stories of loss. In some you will find redemption, in others the simple act of memory is treacherous and cannot be trusted. But in all of them there is an aching sense of loss and love. Savile's stories here speak to the part in all of us who still dares to fall in love again after a broken heart.

Alternitech by Kevin J. Anderson -- $2.99, 3.7 stars (3 customer reviews)

"Alternitech" sends prospectors into alternate but similar timelines where tiny differences yield significant changes: a world where the Beatles never broke up, or where Lee Harvey Oswald wasn't gunned down after the Kennedy assassination, where an accidental medical breakthrough offers the cure to a certain disease, where a struggling author really did write the great American novel, or where a freak accident reveals the existence of a serial killer. Alternitech finds those differences -- and profits from them.

Well, it's useful to me. ;^) Probably worth a dollar. The last scifi bundle I got, Anderson's was the only decent book in the bunch (though I may have only spent a dollar), but it was obviously his first novel.



posted by ruffin at 3/19/2015 08:21:00 PM
Friday, March 13, 2015

Very good, though oversimplified and not exactly chock full of programming examples, introduction to Big O notation here:

I said that the idea behind Big O notation is simple and useful. It's simple because it lets us ignore lots of details in favor of descriptions like "each conference attendee adds more work than the last" or "each one adds the same amount of work". And it's useful because it lets us answer questions like "can we actually solve this problem in a reasonable amount of time?"

Speaking of which, if you're a programmer, you probably use hashes (also known as dictionaries) all the time. Did you know that no matter how many items are in a hash, their lookup and insertion time never bogs down? It's O(1). How do they do that?

I finally learned what hashing really was last year (just out of curiosity) -- and duh, and brilliant, and duh. Hashtables take about 30 seconds to explain, if you don't read through the snippet from Moby-Dick; or, The Whale, as we like to call it, below.

Create a hashing function that produces a hash that's significantly smaller than the average length of the value being stored in your hashtable (md5 is fine, for example). Now use those as keys for your values on insertion.

So this line of text...

Whenever I find myself growing grim about the mouth; whenever it is a damp, drizzly November in my soul; whenever I find myself involuntarily pausing before coffin warehouses, and bringing up the rear of every funeral I meet; and especially whenever my hypos get such an upper hand of me, that it requires a strong moral principle to prevent me from deliberately stepping into the street, and methodically knocking people's hats off--then, I account it high time to get to sea as soon as I can. This is my substitute for pistol and ball.

... will always be found at location 0e92fa0b51b9f8879eae58ad30bc943c. ALWAYS. There's no look-up at all. Wow. Neat. And, in retrospect, duh. (Brilliance to me is always something that's so painfully obvious after you hear it that you can never again forget it -- literally life-[view-]changing on some level.)

Now by virtue of having fewer bytes in the hash than the value (ignoring that we could zip that text and have less already, etc etc), we will have collisions, where more than one value will have the same hash as another. Oh noes! How can we store the value if something's already at its index? Push it from the U[nsolved] drawer to the underused X files?

No no. If that happens in your hashtable, you simply string each content piece along in a list that you store at that key. Now your lookup time for collisions is slightly longer than straight O(1), but it's still so good it's barely worth worrying about. Read the above to know why that means it's still essentially O(1). Also now obvious: Why Hashtables don't guarantee order.
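
In code, the whole trick (plus the collision chaining) fits in a few lines. A toy sketch in JavaScript -- real implementations use better hash functions and resize their bucket arrays, but the mechanics are the same:

```javascript
// Toy hash function: fold the key's character codes into a bucket
// index. (md5 etc. would work too; this just keeps it readable.)
function hash(key, size) {
  let h = 0;
  for (let i = 0; i < key.length; i++) {
    h = (h * 31 + key.charCodeAt(i)) % size;
  }
  return h;
}

class HashTable {
  constructor(size = 16) {
    this.size = size;
    this.buckets = new Array(size);
  }
  set(key, value) {
    const b = hash(key, this.size);
    if (!this.buckets[b]) this.buckets[b] = [];
    const entry = this.buckets[b].find(e => e.key === key);
    if (entry) entry.value = value;
    else this.buckets[b].push({ key, value }); // collision? string it along
  }
  get(key) {
    const b = hash(key, this.size); // no search -- straight to the bucket
    const entry = (this.buckets[b] || []).find(e => e.key === key);
    return entry ? entry.value : undefined;
  }
}
```

With only four buckets and five keys, collisions are guaranteed, and lookups still work; that's the chaining doing its job.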

Again, hashing is brilliant. And, well, after you hear it, duh.



posted by ruffin at 3/13/2015 02:48:00 PM
Thursday, March 12, 2015

If you're using Cscript to execute JavaScript on Windows...

This doesn't work:

"asdf"[0]

This does:

"asdf".slice(0,1)

This is not fun. It's one thing to learn idiosyncratic JavaScript engines in different browsers on different platforms, but another when you're dealing with engines that just support whatever the heck they wanna. (He says without having checked the ECMAScript standard...)

I mean, Powershell is idiosyncratic like mad, but it's solidly idiosyncratic in consistent ways. Cscript might not quite understand that JavaScript is understood outside of Cscript context... (Why not just use Node? I would, but this is code we had delivered from YA Contractor, and it's Windows-objected-up all over the place.)
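
For code that has to run under Cscript and everywhere else, `charAt` is the safe spelling -- bracket indexing on string primitives wasn't standardized until ES5, but `charAt` has been around since the early days:

```javascript
// ES3-safe character access. "asdf"[0] relies on ES5 string indexing,
// which old JScript engines (as used by Cscript) don't support;
// charAt() works in both.
function charAtIndex(s, i) {
  return s.charAt(i); // same result as s.slice(i, i + 1) for valid i
}
```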



posted by ruffin at 3/12/2015 04:52:00 PM
Wednesday, March 11, 2015

You'd think that now, our third time venturing into the Apple Minimal Laptop, we'd have our reaction down. But we still act surprised -- see Joanna Stern on the new MacBook Thin.

Based on my short time with the machine, I think Apple will turn the laptop industry on its head (again), but just like with the original Air, it may be moving too fast for some.
...
We donโ€™t need all those other ports, Apple says. We are living in a wireless world now, where we can connect most of our peripherals without cords. Except every morning I plug my Air into a Thunderbolt display, and every afternoon I plug my iPhone into my Air to charge it.

Just caught Jason Snell on the Thin where he talks about writing for the Thin pretty much exactly what he wrote for the Air.

Well, let's do you one better: This is the same thing folks wrote about the original bondi blue iBook. When the Air came out, I penned a quick something for Low End Mac that Charles Knight brilliantly called, "Mother of the MacBook Air". You know, there's a good turn of a sentence or two in there.

On paper, the machine feels limited. Once you have one to use yourself, the feeling evaporates.
...
Initially, I thought the Air was an overpriced joke, a means of exploiting Mac users that are more interested in sexy and cool than practical tech specs. I've changed my mind.
...
If my experience with the iBook 300 is any indication, it and the Air are both cleverly designed, single-purpose road warriors whose limitations will be much more obvious to those who don't buy them than for those who do.

I think I still feel exactly the same. Though, if pressed, I think I'd buy a MacBook Pro 13" if I was buying today. ;^)



posted by ruffin at 3/11/2015 05:25:00 PM

http://thetalecrafter.com/2011/10/07/on-pattern-hating/

Patterns are a poor substitute for problem solving.

Well said.

It's fun to check out the profiles of folks who post good answers to StackOverflow. Occasionally you can stumble over really interesting blogs.



posted by ruffin at 3/11/2015 03:43:00 PM

In Outlook Web Access' Settings, if you select "Always show From", a message with a grey background pops up that says...

Warning: Messages you compose won't be sent using S/MIME when this option is selected.

I have no idea why that matters. Does that mean you can see *and edit* the From line? That would be a little wacky, but even then, you just couldn't S/MIME if you edited it to an address that didn't match your key, right? That doesn't seem like a particularly difficult programming exercise.

I haven't been impressed with OWA yet. I have two accounts with it at work, and for the one that'll let me, I'm using ExQuilla and Thunderbird instead. OWA's web client is still entirely too rough around the edges.



posted by ruffin at 3/11/2015 09:22:00 AM
Tuesday, March 10, 2015

And now, the reasoning behind the line breaks in Markdown, from the horse's mouth:

The implication of the "one or more consecutive lines of text" rule is that Markdown supports "hard-wrapped" text paragraphs. This differs significantly from most other text-to-HTML formatters (including Movable Type's "Convert Line Breaks" option) which translate every line break character in a paragraph into a <br /> tag.

When you do want to insert a <br /> break tag using Markdown, you end a line with two or more spaces, then type return.

Yes, this takes a tad more effort to create a <br />, but a simplistic "every line break is a <br />" rule wouldn't work for Markdown. Markdown's email-style blockquoting and multi-paragraph list items work best -- and look better -- when you format them with hard breaks.

I feel kinda like I'm arguing for JSHint over JSLint at this point (I WANT OPTIONS!), but this isn't code, it's composing content. (It's a distinction for me, anyhow. ;^D)

Wonder what the "hard-wrapped" use case is? Pastes from emails? Hard wraps in BBEdit? (That second one seems unlikely.) Editing content that already contains hard wraps always stinks. I can't imagine Gruber's doing that.



posted by ruffin at 3/10/2015 09:21:00 AM

From readthedocs.org on Markdown:

To force a line return, place two empty spaces at the end of a line.

I guess that's better than nothing, but it seems like br's for every return would be a common enough request to be an option in most editors.



posted by ruffin at 3/10/2015 09:16:00 AM
Monday, March 09, 2015

Is it just me, or is anyone else fed up with the number of pixels spilt covering this danged Apple Watch? I listen to a number of Apple-themed podcasts (Accidental Tech Podcast, The Talk Show, Upgrade, Core Intuition, Release Notes), and they wasted hour after hour talking not about how the Apple Watch might change the world, but about which version was the best, and who should buy, and if the bigger ones should be more expensive, and slow, painful rationalizations about why they were going to shell out for the watch above the Sports version.

The only podcast whose talk about the Watch hasn't been tedious banter over the cost of the Edition or about which stupid band's going to look best with what (and, if you can buy bands separately, you know, putting a nice band on a Sports watch is a great way to stick it to the man! What a loophole!) is David Smith's Developing Perspective, which thankfully has kept itself limited to why one'd want to program for the thing at all.

WHO CARES TO HEAR WHY YOU WANT A WATCH OR HOW MUCH THE OBSCENELY EXPENSIVE "EDITION EDITION" WILL COST?!!

What a freakin' set of First World Problems. The iPhone is neat. It's a computer in your pocket. Even though the iPhone 6 is a First World Problem, smartphones are not; they're even more ubiquitous than PCs. Heard an interesting Upgrade (iirc) podcast with a guy from the Khan Academy, who was trying to figure out how to put programming lessons on a featurephone.

Internet via mobile device is important. Which band you buy or how much gold is in the embarrassingly expensive Edition edition [sic] for Apple Watch 1.0 is not. I don't mind a quick discussion, but folks are obsessing over this meaningless cruft.

I don't mind if Apple moves towards an emphasis on style, but I do worry if the Apple developer and evangelist community painfully follows suit.

Labels: , ,


posted by ruffin at 3/09/2015 09:25:00 PM

Looks like I'm not going to be driving a Thunderbolt Display with my MacBook 2015, if I order one. I don't see anything on the new MacBook's tech specs page supporting Thunderbolt display out, nor does the Thunderbolt Display page (the monitor in this case) list it as a "Thunderbolt-Enabled Mac Computer".

Labels: ,


posted by ruffin at 3/09/2015 02:55:00 PM

Man, I'd convinced myself to buy a new screen ($60) for my late 2009 MacBook to replace the screen that died last month. Other than the screen, it's been working surprisingly well, even coding with Xamarin Studio, now that I've installed an inexpensive 64 gig SSD. Kinda needs a new battery, but overall it's doing great.

But this new retina MacBook looks fantastic. Super small, retina screen, and no fan -- especially when my T430 has been driving me crazy with its fan for a while now. If the new MacBook's price is anything approaching sensible, I'm going to be hard-pressed not to preorder immediately. Because there are only three sensible times to buy Macs to get the most out of your money: When a new model's released (and if you don't preorder immediately, you end up waiting weeks extra until yours shows up), when a new model first hits refurb, and when the old version that's been replaced hits refurb/clearance.

I bought a new iMac last year (refurb, natch) instead of a MacBook Retina in the hopes that I could code in the home office, bag a great monitor, and hold off on the laptop for a while. Now they're going to sucker me into buying two boxes in 15 months. I hate their marketing genius. ;^)

Labels: , ,


posted by ruffin at 3/09/2015 01:49:00 PM
Thursday, March 05, 2015

Sustainable Indie Software — Helftone:

If things continue as they are [trending towards unsustainable 99¢ app prices], we are on the path to miss out on a lot of innovation. Large companies are not even interested in making software that grosses anything in the $100-200k range (that figure might only cover a single engineer's salary, not including additional taxes and costs like offices etc). But that sort of money is significant enough to have an indie dev work on your favourite app full time.

That's really the dream, isn't it? Presented realistically enough, but still the dream. (via Michael Tsai's excellent blog)

Labels:


posted by ruffin at 3/05/2015 11:22:00 PM
Tuesday, March 03, 2015

Today I find out, only 30 years too late, that Suspended (by Infocom) has a map.

http://gallery.guetech.org/suspended/suspended.html

Man, that would've been useful back in the 80s. I might have to try it again.

Labels: ,


posted by ruffin at 3/03/2015 09:06:00 AM
Monday, March 02, 2015

I've heard of "slamming" in the context of a company changing a phone customer's long distance company without their explicit consent. The switch from Google to Yahoo as the default search engine for Firefox seems to have happened in a similar way. Figured I'd post this screenshot (I may have already; sorry) before I forget I've taken it. I believe I have another from home that shows Yahoo in the search box as the new provider, or at least different behavior when you're not using Yahoo. I don't recall.

As I've mentioned before, there's no way Yahoo's gains in Firefox search stick. It took me one search to get disgusted and swap back. I imagine others will have the same reaction with a little less immediacy. No offense, Yahoo, but Google has search down cold, as long as Superfish hasn't infected your box[1]. Better results, huh? By what metric is that measured? I don't even dislike Yahoo. I just don't think it's cool to slam users who are happy with whatever search provider they selected already.

I will back down just a touch (as I may have mentioned before) if this only happens if you never swapped search providers. My iMac at home and my work box are pretty new, and I don't recall if my MacBook or Lenovo acted slammed or not (I've flirted with Bing on both a few times over the years). But if Firefox swapped *everyone* to Yahoo, not just those who'd been using Google because it was the default, well, that's unavoidably a slam.

Anyhow, now with image. Yay.

[1] RLY Lenovo? I'm still getting over that idiotic move.

Labels: ,


posted by ruffin at 3/02/2015 04:48:00 PM



URLs I want to remember:
* Atari 2600 programming on your Mac
* joel on software (tip pt)
* Professional links: resume, github, paltry StackOverflow
* Regular Expression Introduction (copy)
* The hex editor whose name I forget
* JSONLint to pretty-ify JSON
* Using CommonDialog in VB 6
* Free zip utils
* git repo mapped drive setup
* Regex Tester
* Read the bits about the zone
* Find column in sql server db by name
* Giant ASCII Textifier in Stick Figures (in Ivrit)
* Quick intro to Javascript
* Don't [over-]sweat "micro-optimization"
* Parsing str's in VB6
* .ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture); (src)
* Break on a Lenovo T430: Fn+Alt+B
email if ya gotta, RSS if ya wanna RSS, (?_?), ¢, & ? if you're keypadless


Powered by Blogger
The postings on this site are [usually] my own and do not necessarily reflect the views of any employer, past or present, or other entity.