
title:
Put the knife down and take a green herb, dude.


descrip:

One feller's views on the state of everyday computer science & its application (and now, OTHER STUFF), from a guy who isn't rich enough to shell out for www.myfreakinfirst-andlast-name.com

Using 89% of the same design the blog had in 2001.

FOR ENTERTAINMENT PURPOSES ONLY!!!
Back-up your data and, when you bike, always wear white.

As an Amazon Associate, I earn from qualifying purchases. Affiliate links in green.


MarkUpDown is the best Markdown editor for professionals on Windows 10.

It includes two-pane live preview, in-app uploads to imgur for image hosting, and MultiMarkdown table support.

Features you won't find anywhere else include...

You've wasted more than $15 of your time looking for a great Markdown editor.

Stop looking. MarkUpDown is the app you're looking for.

Learn more or head over to the 'Store now!

Monday, April 27, 2015

Another interesting blog post brought to you by a SO profile link:

War Time

It's easy to know when you're at war. There is a constant feeling of subtle panic every time you sit down. You leave the office drained, tired and quite often angry.

Here are some specific symptoms I've noticed from my career:

  1. You have several outstanding production issues that are non-trivial.
  2. You have abandoned quality in favor of quantity. (No unit tests, code review, etc.)
  3. Stakeholders are running amok. You are strongly encouraged to say "yes" to everyone.
  4. You are missing management to help shield you. You have to make managerial decisions, sit in meetings and develop politics.
  5. You are working extra hours to keep up.

There are of course more symptoms. Let's look at peace time to help emphasize the divide between the two:

Peace Time

  1. You have time to read articles about your industry -- without feeling guilty.
  2. You have time to write unit tests.
  3. You have time to re-factor code per (2).
  4. You have time to think about names for objects, classes, etc.
  5. You have periods of focus that go longer than one hour. It's normal to have four uninterrupted hours of time.

After sitting back and remarking "how true" this is (though don't be fooled by #5 on his "peace" list; that doesn't mean you're not at DEFCON 2), it's more interesting to take home Ryan's main point: "The point is that as an organization, you need to know what state you're in."

That is important. Don't pretend you're 110% about quality when you're at DEFCON 2. Admit that you're cutting corners to make dates, and your team will be able to deliberately adjust. And if your company likes to live at DEFCON 2 all the time, don't give applicants the impression that your nuclear readiness ever oscillates. I've had jobs where the peacetime activities, for all practical purposes, never happened, unless you had #5 from the "war" list going (working extra hours).

I will say that I've noticed that a job description with the phrase, "Must be able to multitask well," means, "Must be willing to live at DEFCON 2 at all times." It's just a shame how often folks are at Ryan's "war" time not because they have to be, but because that's the culture management has Stockholmed themselves into.

Labels:


posted by ruffin at 4/27/2015 09:14:00 AM
Friday, April 24, 2015

Hours, the Apple Watch, and turning an app into a business -- The Hours Blog -- Medium:

How do you break into business and the enterprise? We like Slack's bottom-up approach. Start by making the best solution for individuals, who in turn advocate adoption for their team, who in turn evangelize to other teams... and up the chain it goes.

If this isn't already part of your indie business plan, even as just a potential end game, it should be. That's exactly the tack I was hoping to take. It's a long game -- become a household name, at least with the bleeding edge households, and then, slowly, have your fans champion your app at their businesses.

 That requires some planning, not the least of which is ensuring that your app has features that appeal to business users. And, admittedly, it's a stolen plan. Apple's the most obvious, but even Aeron chairs are probably executive-first finds.

But there's probably not enough money in the consumer market to power a large business for most any software-first company short of games. Even _David Smith recently lamented that his company peaked a few years ago, and that he has to at least consider that the indie iOS ride might not last forever. If you want to make money, you have to go where the money is, and that, obviously enough -- even ontologically -- means targeting those entities whose primary goal is to create wealth: businesses.

Not sure how it'll work for Hours, nor am I sure going free and incurring the extra support load is best, but I will say that targeting business is the smart way to go.

Labels: ,


posted by ruffin at 4/24/2015 07:50:00 PM
Wednesday, April 22, 2015

From Wired (via SixColors):

Boeing 787 Dreamliner jets, as well as Airbus A350 and A380 aircraft, have Wi-Fi passenger networks that use the same network as the avionics systems of the planes...

Whoever made that decision should be fired. And if that person can't be identified -- heck, even if they can -- the manager should be sacked. And their manager as well. Idiotic. What's the price to set up two independent networks, honestly? And what percentage of the total price of the plane is that?

I mean, you have got to be kidding me. I saw the piece on 60 Minutes when they supposedly hacked a car remotely, which, honestly, even if the folks interviewed stretched the truth a little, I can still believe. That is, I don't expect across the board brilliance on cars. At times, I'm surprised they work at all.

But planes? Aren't we a little less worried about bleeding edge and more worried about safety? Maybe we should all fly around in A-10s, since they have "manual reversion mode", where you can fly without any hydraulics, much less networking, if it all goes to heck.

Honestly, as a programmer who always says if you don't have three copies of any digital artifact in three different places, you don't have a file at all, I'm surprised every plane isn't made like an A-10.

This is why I try not to fly.

Labels: ,


posted by ruffin at 4/22/2015 10:41:00 AM
Friday, April 10, 2015

From the Appbot's blog, "Dissecting The App Store Top Charts":

In my mind games have always dominated the App Store, both in downloads and revenue, but what is the truth?

This inspired me to dig into the US top 200 charts (free, paid and grossing) to check out how the categories and age of the apps compared. The data is a snapshot taken on April 8, 2015.

For me, the most interesting revelation was the make-up of paid apps:

  • 42% Games
  • 12% Photo and video
  • 11% Health and fitness
  • 6% Entertainment
  • 4% Each for Utilities, Business, Weather, Music
  • 3% Each for Reference, Education, Productivity

I think that's percentage by app, without any weighting for price. It's just the number of apps on the store. Still, pretty telling.

Notice too what's fallen essentially completely out: In free apps, Social Networking is 12% of the pie. In paid, zippo in the 3% or above.

Of course, what'd be really useful would be how people buy, not what's on the shelf. There can be dozens of brands of cookies, but if 98% of folks are buying Oreos, I'm not sure I want to be in the fig newton market, if you get my meaning. I mean, it could just be that every new Objective-C homebrewer brews a game first, trying to be the next Flappy Clash Birds.

Labels: , ,


posted by ruffin at 4/10/2015 02:15:00 PM
Thursday, April 09, 2015

So far, I hate the new Photos for OS X. Three beach balls on startup (looks like every reviewer used an SSD), and it immediately imported iPhoto without asking. Thanks.

I deleted both photo libraries and started over. Eventually I created a new, blank (or so I hoped) library so I could drag in my photo folders manually. And I've chosen not to import photos. I already have them in folders. I don't need them twice.

Then, suddenly, I start getting about eight random pictures. What the heck? Where are these coming from? I didn't want them in there. I hide them, since I apparently can't delete them.


Then I choose to import a folder. It doesn't, afaict, recurse directories. Wth?

Now I've got more randomly found photos. What the freakin' heck is Photos doing? Who's running this ride? These are not the photos I tried to import just a second ago.

Okay, so I start over again. I delete the photo library. I put the new photo library that I create on next startup into its own folder. Now there's nothing. Let's see if I can drag lots of year folders over. I can, but Photos doesn't tell me anything. No, "I see your folders, and I'm importing now." Nothing. It sits there. It's still sitting there. It either grabs random photos I don't want, or it sits. Nice.

So I try again. ONLY NOW does it act like it knows I tried to import something a few minutes ago.


Beauteous. Just beauteous.

I read the iMore review that makes things sound pretty rosy. Ain't true for the "start from scratch" use case. Not yet.

This stinks. STINKS. I guess Picasa, which does actually do what I tell it, still wins.

EDIT: Oh, wait. I guess I was supposed to see this horribly informative "alert" to know importing was underway:



I call it, "The Universally Recognized Circle o' Importing".

And now my late 2014 iMac goes from super responsive to crawling, thanks to the spinning platters. I hate the way OS X is tuned only for SSDs at this point. Its performance really depends on them. I installed an SSD on a Late 2009 MacBook -- the unibody white one -- and it does great now. But my quad-core iMac? Crawling in molasses. Looks like Photos is creating all those thumbnails, which is going wonderfully.




Then this happened.
Great job, Photos. Guess I'll go finish my Node testing on my Lenovo.

Labels: , , ,


posted by ruffin at 4/09/2015 12:36:00 PM
Friday, April 03, 2015

As we continue to think aloud about MVC patterns... When you get rid of the Repository, you also get rid of the "sad tragedy" of repository architecture debate theater. If you want to see today's real lesson, go ahead and skip to the end.

CodeBetter --- DDD The Generic Repository

Consider the following code:

Repository<Customer> repository = new Repository<Customer>();
foreach (Customer c in repository.FetchAllMatching(CustomerAgeQuery.ForAge(19)))
{
}

The intent of this code is to enumerate all of the customers in my repository that match the criteria of being 19 years old. This code is fairly good at expressing its intent in a readable way to someone who may have varying levels of experience dealing with the code. This code also is highly factored allowing for aggressive reuse.

Especially due to the aggressive reuse the above code is commonly seen in domains. Developers are trained that reuse is good and therefore tend towards designs where reuse is applied

It bugs me that anyone could use "code reuse" as a positive when talking about repositories (but skip to the end to see what's really going on here). By definition, all this jive is repeated code -- or at least code that runs through a Rube Goldberg machine before it becomes SQL, which is worse.

Again, let me say again that I believe entities make some sense when you're looking to enforce business logic, but then I'm challenging you again to let me know why that isn't better handled -- ONCE! -- by your rdbms. Your entities are your data objects, and your capital-r Reads don't give a flying flip about them other than joining them together to produce their views.
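To make that concrete, here's a minimal sketch -- Python with an in-memory SQLite database, and the table, column, and age rule all invented for illustration -- of letting the rdbms enforce a business rule ONCE, instead of re-implementing it in an entity layer that every writer has to remember to go through:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The rule lives in the schema, enforced once, for every writer,
# no matter what code path did the INSERT.
conn.execute("""
    CREATE TABLE customer (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        age  INTEGER NOT NULL CHECK (age >= 18)  -- business rule, enforced by the rdbms
    )
""")

conn.execute("INSERT INTO customer (name, age) VALUES (?, ?)", ("Alice", 19))

try:
    conn.execute("INSERT INTO customer (name, age) VALUES (?, ?)", ("Bob", 12))
except sqlite3.IntegrityError:
    print("rejected by the database, no entity layer required")
```

Same idea at real scale: CHECK constraints, foreign keys, triggers. The database already has the machinery; the entity layer just repeats it.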

CodeBetter -- DDD Specification or Query Object

One of the nice benefits of a Specification is that one could write some code like the following:

IEnumerable<Customer> customers =
CustomerRepository.AllMatching(CustomerSpecifications.IsGoldCustomer);

Writing code like this has allowed the developer to reuse a specification from the domain within their repository as a method for querying. While this may seem to be a good thing at the outset this mentality introduces a host of problems.

Performance

The first and largest problem that one will run into when dealing with this type of API is that the Repository is necessarily a leaky abstraction. The GoldCustomerSpecification is a piece of code, it represents a predicate for whether a single customer is or is not a gold customer. In order to return a set of customers that represents all of the customers matching the GoldCustomerSpecification the repository will need to run the specification on every customer. ... On the read side of your domain (a different layer if you use cqs) you want clients to be able to pass query objects directly to your repositories. Keep in mind that these are not the repositories on the transactional side (read: domain) but are supporting the complex reporting behaviors needed. It is often times not possible to completely isolate every type of report you may like to run (but you should still try to do this where possible as the strong contract has benefits).
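Here's a contrived Python/SQLite sketch of that leak (the gold-customer threshold and the table are made up): a specification that lives in code has to pull every row back and test it in memory, while the same predicate pushed into SQL lets the engine do the filtering:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, total_spent REAL)")
conn.executemany("INSERT INTO customer (total_spent) VALUES (?)",
                 [(x,) for x in (50, 5000, 120, 9000)])

# A specification is a piece of code: to apply it, every customer has to be
# materialized and tested in memory -- full scan, full transfer, every time.
is_gold = lambda row: row[1] > 1000
gold_in_memory = [r for r in conn.execute("SELECT id, total_spent FROM customer")
                  if is_gold(r)]

# The same predicate expressed in SQL lets the engine filter (and index).
gold_in_sql = conn.execute(
    "SELECT id, total_spent FROM customer WHERE total_spent > 1000").fetchall()

assert gold_in_memory == gold_in_sql  # same answer; very different cost at scale
```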

CodeBetter -- CQRS and Event Sourcing
related: Martin Fowler -- Event Sourcing

If we were to say use a relational database, object database, or anything else that only keeps current state we would have a slight issue. The issue is that we have two different models that we cannot keep in sync with each other. Consider that we are publishing events to the read model/other integration points, we are also saving our current state with a tool like nhibernate. How can we rationalize that what nhibernate saved to the database is actually the same meaning as the events we published, what if they are not?

Ayende -- Repository is the new Singleton

The most commonly used definition for Repository is the one in Patterns of Enterprise Application Architecture:

A system with a complex domain model often benefits from a layer, such as the one provided by Data Mapper, that isolates domain objects from details of the database access code. In such systems it can be worthwhile to build another layer of abstraction over the mapping layer where query construction code is concentrated. This becomes more important when there are a large number of domain classes or heavy querying. In these cases particularly, adding this layer helps minimize duplicate query logic.

That's actually pretty interesting -- I mean, query repetition is very obviously the problem with what I'm proposing (a SQL query per controller action), but worded fairly well. Of course my response is that there's nothing wrong with a defensive separation of logic. Think smartly self-contained microservice.

PlanetGeek.ch -- What is that all about the repository anti pattern?

Complex queries should be placed into query objects according to his article. So do we really need a repository? This article tries to answer this question.

No. ;^D

SapiensWorks -- The Generic Repository Is An Anti-Pattern

A repository is a concept to abstract the access to the persistence, that is not to depend on data access implementation details. There is no formula and no rules. ... Other offender in regard to generic repositories is the fact that lots of developers just use it to wrap the DAO (Database Access Object) or an underlying ORM (like EF or Nhibernate). Doing so they add only a useless abstraction, pretty much just making the code more complex with no benefits. A DAO makes it easy to work with a database, an ORM makes it easy to access a database as an OOP virtual storage and to eventually abstract the access to a specific database.

Emphasis mine. Thanks for that line. Phew. Though I still dislike most ORM-based implementations, I think.

Moneyball for today

Also from the above link:

But the repository should abstract the whole persistence layer, hiding implementation details like database engine or what DAO or ORM the app is using but also providing a contract that makes sense from the application point of view. The repository serves the application needs, NOT the database needs.

Now we're getting somewhere, aren't we? THIS, not DRYness, is a repository's real advantage. And who the heck really swaps out the datastore of a mature app? Bueller? Then why abstract it?!?!!!1!

If you're not going to abstract the engine from the application, you don't use a repository. And if you want performance, you don't want to abstract the engine. Trust me.

That is, in brief, my bets are on SQL (though SQL is less important than your data persistence model -- and I'm leaving myself open to situationally microservice my way away from whatever persistence model I initially pick too), not the convoluted code overhead and repetition of Repositories.

If you're honest with yourself, you're very likely already betting on [your code persistence model]. If you're not factoring your persistence engine into your code, you're almost certainly going to see performance problems at scale. That is, if you're "hiding implementation details like database engine or what DAO or ORM the app is using", you've already eliminated too many possibilities for optimization and made your codebase more difficult to maintain. Lose lose, man, lose lose.

Labels: , , ,


posted by ruffin at 4/03/2015 03:06:00 PM

Watching files grow to see when a process that's writing to them ends is a little like watching grass grow. But there are easier ways than ls -alF followed by arrow up, return, followed by arrow up, return, followed by arrow up, return... This is really neat -- if you add a -d for the first option (differences), you can get the changes in the command highlighted in realish time too, which is awesome.

http://stackoverflow.com/questions/18645759/tail-like-continuous-ls-file-list/18645991#18645991

You can use the very handy command watch

watch -n 10 "ls -ltr"

And you will get a ls every 10 seconds.

And if you add a tail -10 you will only get the 10 newest.

watch -n 10 "ls -ltr|tail -10"

So watch -d -n 10 "ls -ltr|tail -10" ftw.

Also neat was to learn a bit more about tail and how it is less "tail end of the file" as much as it is, "Put a tail on that file and tell me where it goes." Every time the file updates, bam, you get the lines that were appended. That's cool.

http://unix.stackexchange.com/a/45628/87389

You can use tail command with -f :

tail -f /var/log/syslog

It's a good solution for watching in real time.

Labels: , , ,


posted by ruffin at 4/03/2015 02:00:00 PM
Thursday, April 02, 2015

Getting the text of existing sprocs is apparently pretty easy: EXEC sp_helptext N'sp_get_composite_job_info';

Voila.

I've been having trouble ordering the results of a stored procedure, which is probably best handled by putting the results into a temp table. Seeing, in this case, the code that creates the temp table the sproc's giving back should be useful.

Although, in my case, no dice. I ended up cheating and trivially rewriting the sproc and the sproc it called.

sp_help_job calls sp_get_composite_job_info, which ends with a statement with its own ORDER BY, which is all I was interested in changing.

So a quick change there...

-- ...
FROM @filtered_jobs fj
LEFT OUTER JOIN msdb.dbo.sysjobs_view sjv ON (fj.job_id = sjv.job_id)
LEFT OUTER JOIN msdb.dbo.sysoperators so1 ON (sjv.notify_email_operator_id = so1.id)
LEFT OUTER JOIN msdb.dbo.sysoperators so2 ON (sjv.notify_netsend_operator_id = so2.id)
LEFT OUTER JOIN msdb.dbo.sysoperators so3 ON (sjv.notify_page_operator_id = so3.id)
LEFT OUTER JOIN msdb.dbo.syscategories sc ON (sjv.category_id = sc.category_id)
--!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
--ORDER BY sjv.job_id
ORDER BY name
--!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

... and a quick change in `sp_help_job`...

-- Generate results set...
--EXECUTE sp_get_composite_job_info @job_id,
EXECUTE WACK_sp_get_composite_job_info @job_id,
    @job_type,
    @owner_login_name,
    @subsystem,
    @category_id,
    @enabled,
    @execution_status,
    @date_comparator,
    @date_created,
    @date_last_modified,
    @description

... and I'm working. (Or I could have just overwritten sp_get_composite_job_info with one that sorted differently, but that's obviously destructive, and usually A Very Bad Idea.)

Labels: , ,


posted by ruffin at 4/02/2015 11:23:00 AM
Wednesday, April 01, 2015

More fun thinking aloud about MVC architectures. After reading David Hansson on "Russian doll caching", I think I'm coming around on why you'd use entities to put together piecemeal views, though I'm not sure I'm buying yet.

I'll have to find the post again, but there was one in the links I put up yesterday that said that full page caching was caching's holy grail. Compare the full page mentality to how Hansson describes the issues of caching at serious scale:

This Russian doll approach to caching means that even when content changes, you're not going to throw out the entire cache. Only the bits you need to and then you reuse the rest of the caches that are still good.

This implicitly means that you're going to have extra cost piecing together every page, even if you're just stitching cached content, and the pseudo-formula to compare to CRUD is pretty easy to stub out. If the cost of rebuilding every cache that depends on some reusable subset of those cached views' information is greater than the cost of piecing together pages from incomplete/non-monolithic cache objects on each request, then you go with the [actually fairly conventional] "Russian doll" approach.
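Stubbed out, that pseudo-formula might look like this (all numbers hypothetical, units an arbitrary "cost"; Python just to make the comparison explicit):

```python
def prefer_russian_doll(pages_invalidated, rebuild_cost_per_page,
                        requests_between_churns, stitch_cost_per_request):
    """Back-of-the-envelope: compare rebuilding every dependent monolithic
    page cache on each content churn against stitching cached fragments
    together on every request."""
    monolithic_cost = pages_invalidated * rebuild_cost_per_page
    stitched_cost = requests_between_churns * stitch_cost_per_request
    return stitched_cost < monolithic_cost

# Lots of pages share the churning widget: stitching fragments wins.
assert prefer_russian_doll(pages_invalidated=500, rebuild_cost_per_page=40,
                           requests_between_churns=1000, stitch_cost_per_request=5)

# Few dependent pages, rare churn: just rebuild the monolithic caches.
assert not prefer_russian_doll(pages_invalidated=3, rebuild_cost_per_page=40,
                               requests_between_churns=1000, stitch_cost_per_request=5)
```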

And this largely depends on how many of your widgets appear on more than one page where some [other] subset of the content churns.

The only way we can get complex pages to take less than 50ms is to make liberal use of caching. We went about forty miles north of liberal and ended up with THE MAX. Every stand-alone piece of content is cached in Basecamp Next. The todo item, the todo lists, the block of todo lists, and the project page that includes all of it. ... To improve the likelihood that you're always going to hit a warm cache, we're reusing the cached pieces all over the place. There's one canonical template for each piece of data and we reuse that template in every spot that piece of data could appear.

Still, it's easy enough to conceive of each of these reusable chunks as embedded views, and then you're back to where you started. Pages might be Russian dolls of views (though that's the wrong metaphor beyond expressing the Herbertian concept of "views within views within views". Once you understand views can be made up of views can be made up of views, ad infinitum, you then have to remember that any number of "dolls" can live at any level, rather than the Russian dolls' one-within-one-within-one. Perhaps your main view has five "dolls" inside of it, and those have 2, 3, 0, 1, and 0 dolls inside of them, respectively, and those have...), but then so what?

If you get to the point that one of your embedded views only takes data from one table, great. I guess the only way this is useful is if the same information appears more than once on a composite page of subviews. I still think you're often getting yourself to a specialized DTO for each view, and then you should have an equally specialized Read and mapping that populates that DTO. Unless the price of querying a cache for reused information across many views is less than the price of rebuilding each cache that would be invalidated when that information changes. And that's directly dependent on the number of pages you serve between cache churns.

That is, you can call it an entity, but I think it's more useful to call it a ViewModel. Stop mapping database tables to entities. Always read exactly what you're about to put onto the page directly from the database. That's what it's there for. Really. Smart folks are working hard to optimize your queries. I realize caching makes you think you've already got the data on hand, but your hand-rolled or, worse, ORM's automatic execution plan isn't, at some point, going to be nearly as good as stating what you need in targeted SQL sent to your real rdbms.
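For instance, here's a sketch of that "one targeted query per view" idea -- Python with SQLite standing in for the real rdbms, and the project/todo tables and ViewModel names entirely mine, purely illustrative:

```python
import sqlite3
from dataclasses import dataclass

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE project (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE todo (id INTEGER PRIMARY KEY, project_id INTEGER, done INTEGER);
    INSERT INTO project VALUES (1, 'Basecamp-ish');
    INSERT INTO todo VALUES (1, 1, 1), (2, 1, 0), (3, 1, 0);
""")

# One ViewModel per view, shaped exactly like the page needs -- no
# table-to-entity mapping layer in between.
@dataclass
class ProjectSummaryViewModel:
    name: str
    open_todos: int

def read_project_summary(conn, project_id):
    # One targeted query, letting the engine join and aggregate.
    row = conn.execute("""
        SELECT p.name, SUM(CASE WHEN t.done = 0 THEN 1 ELSE 0 END)
        FROM project p LEFT JOIN todo t ON t.project_id = p.id
        WHERE p.id = ?
        GROUP BY p.name
    """, (project_id,)).fetchone()
    return ProjectSummaryViewModel(name=row[0], open_todos=row[1])

vm = read_project_summary(conn, 1)
```

The Read asks for exactly what the view displays, and nothing is hydrated that the page won't show.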

So, and I'm perhaps overusing Atwood's micro-optimization theater post a little, without a clear winner to the "monolithic refresh vs. stitched page composition" formula a priori, what's important to me is making the system easy to support. And then, certainly, CRUD is a heck of a lot easier than SQL>>>NHib/Caching>>>Automapper>>>ORM>>>Repository>>>MVVM.

(Worth adding that I'm unfairly equating Hansson with SQL/Cache/AutoMap/ORM/Repo/MVVM (SCAORM?) here. Totally unfair; he never says he's ORMing in these posts, afaict. I think the beef here is that he's serving modular pages, and I wonder if it's worth the extra complexity short of MAX SCALE!!1! -- and even then, when you get to displaying logically disparate information, we might be saying something similar anyhow.)

That's enough thinking aloud today. Way too many tedious box-watching style chores this week, sorry.

Labels: , , ,


posted by ruffin at 4/01/2015 12:21:00 PM

The postings on this site are [usually] my own and do not necessarily reflect the views of any employer, past or present, or other entity.