|
title: Put the knife down and take a green herb, dude. |
descrip: One feller's views on the state of everyday computer science & its application (and now, OTHER STUFF) who isn't rich enough to shell out for www.myfreakinfirst-andlast-name.com Using 89% of the same design the blog had in 2001. |
|
FOR ENTERTAINMENT PURPOSES ONLY!!!
Back-up your data and, when you bike, always wear white. As an Amazon Associate, I earn from qualifying purchases. Affiliate links in green. |
|
|
MarkUpDown is the best Markdown editor for professionals on Windows 10. It includes two-pane live preview, in-app uploads to imgur for image hosting, and MultiMarkdown table support. Features you won't find anywhere else include...
You've wasted more than $15 of your time looking for a great Markdown editor. Stop looking. MarkUpDown is the app you're looking for. Learn more or head over to the 'Store now! |
|
Tuesday, June 30, 2015
|
Well, all I needed to get *up and running* on JSP and Servlets (current project might be switching stacks) is here. Very good video package so far. Very basic, but thorough, which makes remembering all this stuff that I haven't used in probably 10-11 years pretty simple. It's all different dialects of the same language, but it's useful to have a primer before changing regions. I just heard someone on a Mac podcast complain about an irrational hate of Java. I don't get it. Java is a good language, other than the ivory tower syndrome that infests many of its stock objects. There's a reason Microsoft stole a lot from Java when they put together C#, to the point that I'm happy working in either for faceless code. Maybe Objective-C users are prone to another syndrome, Stockholm. Labels: java, noteToSelf posted by ruffin at 6/30/2015 11:39:00 AM |
|
|
From CNet:
Is it just me, or is that number waaaay under where you would've expected it'd be? I try to tell folks to use Siri for directions, which it's pretty good at providing, and which seems to, surprisingly, be a difficult thing for folks to do on their own "by hand". I can almost get it to text for me too, especially when I'm plugged in in the car. "Hey, Siri. Text [pseudo Siri-phonetic pronunciation] blah blah some message blah." But half of iPhone users not even playing with Siri once a month? That seems like a fail. I wonder how many iPhone users in that survey still have iPhones that can't use Siri. It can't be many. Forty-two percent, though a famous number, here is a real fail. That said, the point of the CNet piece -- that Siri will be AppleMusic's differentiator -- is interesting. Their example "play the top 10 alternative songs now" is actually pretty compelling. It's Pandora stations with potentially static, user-defined rules on demand. That's pretty cool. Of course, see why I think Apple's (naturally?) moving to streaming music to keep your grain of salt handy. posted by ruffin at 6/30/2015 10:16:00 AM |
|
Saturday, June 27, 2015
|
Why does Apple like streaming music? Because they get to trade on-device storage for cellular bandwidth. And I don't say that because they want to skimp on device-based storage. I'm saying that because folks that can take advantage of unlimited streaming pay for high bandwidth plans, and high bandwidth plans are expensive. You want to keep phones high[ish] end devices that carry status, and right now being able to stream means you've got cash to burn on frivolous cellular bandwidth. (Yes, I said frivolous. I'm old, I know, I know. GET OFF MY LAWN. But I do cringe when I listen to a few minutes of MLB on my iPhone, being a Ting user. These continuous bandwidth things just aren't necessities. And on-device music is pretty cheap. If you don't like buying, try etree.org) By keeping high bandwidth uses mainstream, Apple keeps phone use upmarket, and that's an important part of their company's future. Stay tuned for more high-bandwidth (see current exhibit Facetime over cellular...) uses to come. It's not that there aren't awesome things bandwidth makes possible, but it's not a coincidence or convergent evolution alone that put Apple on board. posted by ruffin at 6/27/2015 01:54:00 PM |
|
|
From a comment on a Lifehacker article: Also, if you open the Image Capture app, at the bottom left there's a collapsible menu that allows you to select what application automatically opens whenever any media is mounted. Yes, please, thank you. I had a real rough ride with Photos, and eventually gave up. I'm not sure why Apple keeps thinking a single "file" with all your pictures is a better paradigm than a real file system. And they can't write a good photo management tool to save their lives. I ran into import stalls, app freezes, and the inability to delete (or even cancel deleting) the giant photo file until I turned off the iMac, rebooted, and used rm -rf *. That shouldn't be necessary. NOTE: Don't type that command in your terminal unless you darn well know what you're doing. Photos in OS X 10.9 stinks. iPhoto tried hard, but stunk. That's it. I'm done. Again. I hope. So I'm back to using Image Capture, which I love, with Picasa, which I also have learned to love. Picasa is quick and defensive, sitting politely on top of your file system without really screwing with it at all. It's an easy paradigm, and it updates quickly when you use the Finder to move your files around instead of its own interface. Because, you know, files. The only problem? Photos keeps trying to open when I attach an iOS device. Beautiful. Not even a button for "create a new library", which I believe iPhoto had. I'm growing increasingly confident Apple's testing doesn't extend to edge cases and what I'll call "exploratory misuse". If you don't follow their oxymoronically private Flowchart of Proper Apple Software Usage, too bad. Why isn't there a "make new library" button? Well, because nobody ever needs to do that. Why won't the Photos library delete in 3 hours? Well, because we all know the first rule of Photos Club: Nobody deletes Photos Club. (rm -rf was much faster, in case you're wondering.)
This Image Capture trick does the trick (though you have to do it for each source) and makes me like Image Capture more. Simpler is often better. Labels: apple fail, os x, photos posted by ruffin at 6/27/2015 10:27:00 AM |
|
Friday, June 26, 2015
|
A good, thought-provoking post from Actively Lazy today:
That's true. Full stop. This, the communication price, is exactly why folks are still quoting The Mythical Man-Month. Communication makes [hu]man-months mythical. But what we do with this discovery is what's really important. It's easy to find one solution and think it's the only solution, as I believe Actively Lazy has done. Let's explore two.

Pair Programming

Here's Mr. Green's (Actively Lazy's author's) take home (emphasis mine):
My quoting makes the argument a little worse; you really should read the entire post in context. But you get the picture. He's arguing that pair programming pays the "coordination and communication" costs as-you-go so that you don't have to pay the piper in spades, to mix some metaphors, later.

Pair programming trades function for quality

When you work in pairs, you create less working code that is higher quality. This is nearly a truism. If you're somewhere code review's benefits aren't appreciated, run. I'm not saying you should require or even use a lot of code review -- my jury's still deliberating on its best use -- but reviewed code is of higher quality than code that isn't.
At the same time, that "high quality code" may not necessarily do the job better-qua-[characteristic X]. Just to get you started, here are a couple of common Characteristic Xs (feels like we're making Powerpuff Girls)[1]:
This is why iteration is such a catch-phrase for coding (though also see its abuse, here).[2] Get 'er done; then, if you have time to notice (or are forced to notice by poor performance), make it better.

Quality vs. Functionality

So is pair programming the answer to removing communication problems? Sort of. Depends on why you think you need to communicate! One key here is that Green only has two developers:
If you only have two devs, of course pair programming kills the problem of deferred communication. But if you have 40 devs, you've now got 20 pairs that could need to sync back up. Twenty pairs is still much, much better than 40 individuals. Twenty pairs should also be much, much better than 20 individuals. The same way letting someone use your alpha version for 15 minutes uncovers bugs simply because they have a different mental model than yours, letting another strong developer take a look at your code will unearth some low hanging fruit (man, I'm mixing metaphors today. Uncover some barely buried tubers?) quickly. And even that single filter means your code will require less change to integrate with the rest of the team. The key take-home is that pair programming makes for higher quality software, not more of it. There's more than a single metric for evaluating software's worth. The other side of code is functionality, often reduced to the reasonably useful metric, lines of code (loc).
And there's a performance bar that tells us if code is good enough. That is, pair programmed code may be higher quality, but it may also not be significantly faster or use significantly less memory, etc. The biggest problem in evaluating how much quality you need is that often you don't know how significant a bad LINQ statement might be until you run it at scale. I think the suggestion is that you have to bias your development culture so that it always strives for high quality with respect to best practices. You want good scalability? Don't overuse ORMs. Don't push logic into the client. Write good SQL. Take time to plan your schema. But I don't know that pair programming, sacrificing half your speed for code quality, is necessary if you have this high-quality culture instilled with a team of well-hired developers. You get the point. The bottom line: Your code is higher quality if it's reviewed for standard practices, errors, and efficiency/scalability first, even if that review time reduces how much functionality you can build. But remember that functionality is why you're in business: you shouldn't sacrifice function for idealistic quality. In other words, you need to discover...

The nasty truth: There is "good enough"

Imperfect code can still provide acceptable functionality. News flash: There is no perfect code. [Most] Any solution beyond a certain level of complexity is subjective. There are many subjective Right Answers to complex problems, and it's precisely because you can only reasonably select one of them that you ask for advice before tackling them. That is, subjectivity is why you design code before coding. Even when you're going to tackle a major problem alone, you "pre-[re]view" with someone else (don't you?!!) your selection from all the different ways you [both] can think of to solve it. The reason you consult someone else first is because you know the problem you're working on needs an extra helping of quality mixed in with the quantity.
You slow down, sacrificing not just your but your teammate's output, to ensure your output's quality. You know your solution -- or at least its design -- needs a second set of eyes. In a sense, though you may not have touched a keyboard [much], you are already pair programming. The conflation of these two code metrics -- functionality & quality -- is what provokes Mr. Green to say...
Sure, one person working alone could bring the code back to a single narrative, but it'd be one person's narrative. You're right back in communication debt. Hopefully it's not as bad as it was before you'd code reviewed with your coworker, but the new communication debt again exists. This single person's progress is another unreviewed revision. And perhaps that's good enough.

A different lesson: Separation of concerns

The real key is that you can't have X people, where X is determined by your company's management, working on the same code at the same time without factoring in the communication costs for getting X folk on the same page. If you don't have time to factor in communication and code review, the new code will be worse in all of those standard ways -- best practices, efficiency, normalization/smart reuse, standardization, error handling. Worse, each person will find themselves coding around or being hamstrung by changes to their narrative made by X-1 other coders. A codebase shared by X devs without coordination is worse than having X coders doing their own thing. You do not get X-times (or "Xx" -- I regret my use of "X" at this point) the work, even if each dev is working on their own seemingly independent story! To get "Xx" functionality (and that's what the company wants, man. When you feel pressure to go faster, and are considering adding more devs to finish sooner, it's because you want functionality), you have to separate concerns perfectly.
The interface is a contract. And objective contracts (strangely possible in code; it's magic) are the most efficient means of communication for software projects. Now you're creating functionality as quickly as is possible. And if done correctly, you can get a lot closer to the myth. But there's a huge, obvious downside...

Communication Debt added to your Technical Debt

And remember, if X is large and your stories are done by one person per story, your "communication debt" will be just as huge. You will have technical debt, and you will have a huge learning curve for the new dev if the person working on the code changes. And if the meatware half of the cyborg leaves your company before the knowledge transfer/code review takes place, you're in trouble. The good news? This debt will be firewalled by the interface. That's as far as the bad can go, if you have smart TDD. If X is large and your stories are done by two people, the debt will be more than halved, but your functional output could be more than halved as well! Quality code is hard, (c) 1842. [1] If you want an example of folks arguing against higher quality code for short-term preference Characteristic X, see the constant complaining about JSLint rules wherever JSLint is used. Let me summarize two-thirds of the answers on the JSLint tag on StackOverflow: "If you don't like rule X, you should use JSHint and turn it off." JSLint is a form of code review that emphasizes a set of "best practices". If you're left to your own devices, you might not always follow them. Following them might not make for better code in every situation, but you will have good, standardized code with fewer errors. JSLinted code is higher quality code, though conforming takes more time than not. As I've said before, "If I had to inherit legacy code sight unseen, and could only require that it be JSLinted or JSHinted (pick one), I'd pick JSLinted code every time," and that's because it's higher quality. [2] Hey, look! 
I've talked about Agile (here regarding documentation) as early as 2004! Three cheers for ourselves! Labels: coding, cyborg, long, management, style posted by ruffin at 6/26/2015 10:20:00 AM |
|
Tuesday, June 23, 2015
|
Programmers are copying security flaws into your software, researchers warn - CNET: Working more as code assemblers than as writers, programmers are sourcing about 80 percent to 90 percent of the code in any given software application from third parties, many experts estimate. Unless we're talking about third-party libraries (which I just finished lamenting earlier this month), there's no way that's true. Even then, there's no mature, custom codebase with only 20% original code. Gosh, I wish it were that easy. In other words, your WordPress programmer isn't [a programmer, to be clear]. I love the unsourced "many experts" too. posted by ruffin at 6/23/2015 07:57:00 PM |
|
|
Our IT guy just relayed a priceless message from Sharepoint that "explains" problems we're having accessing Sharepoint today. Honestly, it's classic. If you're having problems accessing SharePoint today, we just got this: The blue stuff is from Sharepoint, the rest from our IT guy (who is a bright, often funny, and thankfully competent dude). You can't make this stuff up. I taught business and technical writing for four semesters while TAing in grad school. I'm not sure I could have taught folks to be this perfectly horrible if I'd tried my darnedest. Wow. In a strange, perverted sense, this was perfectly written. Just for fun, let's translate: We don't know why Sharepoint is hanging for up to 10 minutes, but we're going to try and fix it as soon as we can. posted by ruffin at 6/23/2015 11:51:00 AM |
|
|
Got this message at my Outlook account Friday, and just noticed it today: It's time to upgrade to an even better Skype experience on your Windows 8 device. Your current version of Skype is being replaced by an app called Skype for Windows desktop. It has more features to help you stay in touch like screen sharing and group video calling. Also, your chat conversations (from the last 30 days) and all your contacts will appear as normal after upgrading. Ouch. I thought Win10 was going to be more "Metro" friendly, somehow maintaining backwards compatibility with apps written for Win8, but this is making me wonder if the migration path goes back through the desktop. In other fun, whoever came up with the mbox format wasn't on top of their game. It's hard to imagine in the days where SGML and XML are passé that any non-clearly delimited storage format could gain so much practical acceptance. Wow. posted by ruffin at 6/23/2015 08:56:00 AM |
|
Wednesday, June 17, 2015
|
I've got a project where I've been "given"[1] code that has lots of display logic in sprocs on SQL Server, and that display logic (colors, in this case) is fairly inextricably tied in with the data I want to use too. That is, we have several tables that hold raw data, and these sprocs both tease out the data and put it, somewhat inextensibly, into what boils down to one giant "row" of data. In other words, the data in the sproc's output is not normalized. That info is, in the original system, passed on to SSRS (SQL Server Reporting Services, afaict a sort of SQL Server-specific Crystal Reports equivalent) and translated, somewhat painfully, into SSRS's pseudo-html. You can get those results into another table pretty easily, via building a table whose structure matches the output and using something like [...]. Because of the way we've got this set up, all the values are in a single row that extends until the end of time, with specialized display column value after specialized display column value interspersed with the raw data we want to operate on.[2] The Right Thing To Do would be to rewrite the sproc to give us normalized data. But The Quick Thing would be to try and get each column name and value along with that value's display info into a single row we can easily JSON up and send, packaged with a little more display info that'll replace the static setup we have in SSRS, to the client. So from this:
We want to have...
We can join on [...]. So how to turn the first table into one that can be used to build the second?

Generic case

I'm not sure why, but I had to stare at some examples for a while before I kinda got what was going on with
So far, so simple enough...
Here's how you need (or how I needed) to think of what comes next -- Each column is a name and value pair. And we have a choice with each column. We can:
Let's start by unpivoting every
The result is reasonably neat. Remember that, in my use case, there are no ids and only one row, so it'd stop after the third row displayed below.
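Stripped of SQL syntax, the move UNPIVOT just made is pretty simple: explode the listed columns into label/value rows, carrying any unlisted columns along unchanged on every output row. Here's a hypothetical JavaScript rendering of the same idea (illustration only, made-up column names, not the original T-SQL):

```javascript
// A sketch of what UNPIVOT does: take one wide row and explode the
// listed columns into (label, value) rows. Columns NOT in the list
// ride along unchanged on each output row.
function unpivot(row, columnsToExplode) {
  const carried = {};
  for (const key of Object.keys(row)) {
    if (!columnsToExplode.includes(key)) carried[key] = row[key];
  }
  return columnsToExplode.map((label) => ({
    ...carried,
    label,
    value: row[label],
  }));
}
```

Calling `unpivot({ id: 1, phone1: '555-1212', phone2: '555-3434' }, ['phone1', 'phone2'])` gives you two label/value rows, each still carrying `id` -- the tall shape the SQL above produces.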
Just for fun, let's experiment with option 3, "display a column as is", by leaving
Fewer rows, and an extra column.
This shows us that what's explicitly listed in the unpivot clause are the only columns that we're going to "explode" into row values, which allows some interesting uses. There are some crazy caveats, however, like the fact that the columns in the UNPIVOT list apparently have to be the same type and length. Check out this link at "If we attempt to use the UNPIVOT operator to do this, we run into trouble". That's pretty painful, and requires some wacky casting to keep up the shortcut charade, below.

One row mash-up use case (kludges ahoy!)

Let's also create something closer to my original use case, so you can tell exactly how it's useful.
select * from #valuesAndDisplayInfo; That gives us the giant single row of data that parallels what the sproc I talked about gives me.
That's kind of nasty. There are three values with three colors for display, all in the same place. Am I supposed to just JSON that up and look for every label in the format [...]? So let's [...]
Whoops!
Remember that we have to have all of the column types the same in our UNPIVOT list. Let's get cast crazy.
Though the casting stinks, that's not puke out loud horrible, but I really wanted the colors to be on the same row as the raw values. Right now, they aren't.
Here's the bullheaded, inefficient, magic string way around that that we probably oughta integrate earlier.
Success. I don't love it, but you can see how unpivoting helped us get here.
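For comparison, the shape we were fighting to reach -- each raw value on the same row as its display color -- is easy to express outside SQL. A hypothetical JavaScript sketch (the `<name>Color` suffix mirrors the magic-string convention above; the column names are made up):

```javascript
// Sketch of the target shape: pair each raw value with its companion
// "<name>Color" display column, ready to JSON up and send to the client.
function pairValuesWithColors(row) {
  return Object.keys(row)
    .filter((key) => !key.endsWith('Color')) // display columns aren't rows themselves
    .map((label) => ({
      label,
      value: row[label],
      // The magic-string convention: phone1's color lives in phone1Color.
      color: row[label + 'Color'] || null,
    }));
}
```

So `pairValuesWithColors({ phone1: '555-1212', phone1Color: 'red', balance: 12 })` yields one `{ label, value, color }` object per raw column, which is exactly the row shape the wacky casting above was chasing.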
Pretty good introduction and further exploration of [...]. Again, The Right Way is to rewrite the sproc, which I think I'm going to do, but this is [...] [1] Hrm, not exactly "given". Inherited? What's it called when you have contractors who are hired before you start and pick a stack that's probably not what you would've picked? [2] "... on which we wish to operate"? Labels: SQL posted by ruffin at 6/17/2015 04:12:00 PM |
|
|
That tab separated values to ASCII (and Unicode) table-like structures generator is here. And here's the sauce of that link (for me, at least). Particularly useful for StackOverflow. Labels: html, markdown, noteToSelf posted by ruffin at 6/17/2015 02:19:00 PM |
|
Monday, June 15, 2015
|
We had one guy at a previous job responsible for hunting down the ins and outs of time management with JavaScript clients (well, and our C# server-side), and I remember him basically boiling down what he'd found to, "Dates are a pain." One level more technical than that: "Convert everything to UTC immediately, all the time, except for the final step before displaying to the user." I've worked with dates enough to know he wasn't kidding, but this takes the cake:
Seems a better fix for ES6 would be to have parseLocal() and parseUTC() (or even a parseISO() and parseColloquial(), if you're not into the whole brevity thing) with a legacy parse() sticking around to keep acting consistently wonky [sic]. With public APIs comes public responsibility. And once you release v1, you've largely already sunk yourself. You will find mistakes, and you will have to fix them with an eye on preserving "wrong" legacy behavior. I hate breaking "fixes" like this one just a little bit more than bizarre, hard-to-understand initial decisions like, well, this one. Labels: dates, javascript posted by ruffin at 6/15/2015 03:15:00 PM |
|
Thursday, June 11, 2015
|
TIL:
Labels: javascript posted by ruffin at 6/11/2015 10:52:00 AM |
|
I'm in a spot where Thunderbird is probably the best way I can access Exchange, and I'm fairly happily using ExQuilla. My only complaint is the inability to send encrypted emails, but that's not a huge deal, since those aren't all that common. But I'm also using the active Ericsson branch of Lightning as an extension to get my calendar, and things aren't nearly so consistent. Just this morning, the calendar tab told me it had 43 queued jobs. That's not great. (I'm not quite resisting the urge to say that I have 43 queued jobs, but... ain't one.) I also couldn't send mail while I was waiting, as "Write" was greyed out. I eventually quit. And yesterday, I had to hard-quit Thunderbird from the Task Manager when it wouldn't close all the way -- the windows were gone, but the process was still running. That's happened a handful of times after a Dismiss button issue (see below). I ascribe most of this weirdness to Lightning, which might not be fair, but there was a check in just a few hours ago, and the fact that the Dismiss button doesn't work reliably really bugs me. I don't care if you haven't successfully updated the server yet. How about keep that reminder window closed for the running app. It should be easy to take the Dismiss request and keep your inability to update Exchange silent, or displayed in a less intrusive way. Is it really that difficult to keep a list of ids for events you should now ignore? Btw, to the folks who clip their nails at work (I'm looking at you too, RMS) like the two dudes who do it in the current office, please heavens stop. I'm irrationally fearful of one of them dropping into my coffee. I mean, unless you're just passive-but-very-aggressively poking at someone, at least take it to the bathroom, right? Labels: email, outlook, thunderbird posted by ruffin at 6/11/2015 10:26:00 AM |
|
Wednesday, June 10, 2015
|
From the numbers released by the team behind Dustforce, a game that's currently on the indie Humble Bundle at the "pay anything" level: The Humble Bundle was a great success: we made roughly $153,915, and unlike the last promotion, we did notice an increase in Steam sales afterwards. With such a huge boost in the number of people playing Dustforce, the amount of daily sales jumped up from under a dozen to around 50 or 60 copies per day. [Emphasis mine -mfn] If I see a game I want at a higher level in the Humble Bundle, I often check the going price really quickly to see if it's less for just that game, especially when I want to play on iOS. That's how I bagged 2-bit Cowboy, though I didn't find it nearly as engaging as I'd hoped. I wondered if what the Dustforce guys saw means others are doing the same thing. That is, being in a bundle is great marketing to a new audience, which can drive sales for people who aren't necessarily paying tons of attention to the "obvious" sales channels. Labels: bundles, game, humble bundle, indie, sales posted by ruffin at 6/10/2015 07:55:00 AM |
|
Tuesday, June 09, 2015
|
This news is the most immediately useful story to come out of WWDC 2015 for me so far:
I just started playing around with making a game with SpriteKit, and was quickly thinking the only way to really test if it feels right is to get it onto real hardware. Until now, I couldn't without shelling out $99. $99 isn't a big deal overall, but it was a significant barrier to entry if what you're doing is just a hobby/side project. It's not that $99 is a lot, especially when you figure out how much productive programming time is worth (i.e., you're blowing thousands of dollars in time for projects that take more than a few hours if you're any good at all), but it's a lot if you don't release anything. Too many careless Benjamins add up. As interesting for me is that I can pay $99 once and release all over the Apple ecosystem. I wasn't real excited about getting dinged twice for Mac and iOS, and hadn't planned on getting iOS-serious for a while. Now maybe the game, if I finish it, will get released, even if it's crud, just 'cause. Sunk costs and all of that. Hrm, maybe this isn't such great news after all. ;^) Labels: apple, development posted by ruffin at 6/09/2015 10:21:00 AM |
|
Monday, June 08, 2015
What does any of this crap have to do with a developers' conference? If you want another keynote, Apple, have one. Concentrate on the "D" in WWDC when at WWDC, kk? (Yes, the accompanying image was edited with MSPaint. Desperate times and all that.) posted by ruffin at 6/08/2015 03:38:00 PM |
|
One last thing... Only today (this morning) did I notice that NIN's The Slip had different cover art for each song. That's a really nice touch. Say what you will about Reznor, it seems he's in it for the art, not profit maximization. The rest was crap. I think that feeling's universal. posted by ruffin at 6/08/2015 02:57:00 PM |
|
Saturday, June 06, 2015
|
I've had a burning desire to make a mail client for, well, decades, I think I can finally say. It's been a pretty popular space recently, but two more announcements have me even more scart [sic]. First, what had been a very quiet Postbox finally hit version 4. Argh. I thought they'd left the game, though I guess it's good business news that they've come back. And daggum Brent Simmons quit his night job and did so because... I decided to leave because I wasn't working on the software that I've been obsessed with for more than a decade. Gosh, I wonder what that could be. I mean, honestly, on some level, it's insanely egocentric of me to think we share an obsession. On the other hand, well, wouldn't that just beat all? I think at some point you just have to tell yourself screw it, I can make something as good or better for enough folks that it still makes sense to try, and just give that cockiness a roll. But man, it was a crowded space already. Guess I'll at least stop wanting to scratch the itch once I've, well, scratched it, and then I can get on with my life. ;^) posted by ruffin at 6/06/2015 09:53:00 PM |
|
Friday, June 05, 2015
jQuery, the Rosetta DOM

I've recently been rolling a node project essentially from scratch, largely so that I can actually learn Node, but also largely due to my aversion to introducing unnecessary libraries. Ever since the first ADO wizards for VB6, all too often libraries do the first 80% of what you need in record time, but getting the last 20% (which can be features, but is often also optimization and bug squashing/code-arounds) done takes more than twice the time you initially "saved". And except for what was just short of a two-day rabbit hole into returning gzipped files (I decided I should gzip on the fly, which is cool, but not mvp), it's gone almost painlessly. Over the years, I've found that you usually have to be proficient enough to write your own library to use an external one mindfully, and that time, imo, is often better spent writing code that targets exactly your own use case instead. Nobody knows your pain points better than you do, and nobody should know how to take them away better either. Well, I've often admitted one exception to my library aversion in client-side Javascript [1]: jQuery. Even I (circa 2008-2013) thought you'd have to be an idiot not to include jQuery by default. Recently, though, even my jQuery love has waned. See http://www.sitepoint.com/jquery-vs-raw-javascript-1-dom-forms/ for some context:
Or, as Rob Niedermayer (I'm assuming) says on knockoutjs' website...
And here's a quote from the "Do You Really Need jQuery?" article:
The title to that section really is the take-home. jQuery is an abstraction layer to make fairly different browser object models look the same when you're coding. jQuery was a Rosetta DOM for browsers in the 1990s and 2000s. But if you've spent much time on the Mozilla Developer Network, you'll notice that the browser compatibility sections are starting to look a lot less nasty. Non-IE browsers have had good standards support for a few years now, and if you can limit IE to 10+, well, I'm not sure you do need [...]. Now look, jQuery rocks. It's, to use "rocks" as many times as possible, rock solid in its rockery. I really like using it, and, during a job interview a few years ago, when asked about Javascript libraries, I said jQuery has a special place in my toolbox where I consider it as good as a first-party lib. It's that solid, in my experience (vs., say, ExtJS 2 or ArcObjects in the 'aughts. Man, those were the bad old days.).

jQuery, the gateway drug

So I'm not opposed to jQuery's use, except that I've slowly come to see that jQuery is a gateway drug. I've talked about overuse of client-side templating here before (the only one I can find right now is this one; surely I've spilt more pixels than that), and I'd argue that the situation my team found itself in there -- where, as a company, we had a system that nobody bothered to test at scale on IE8, our minimum sys requirement, and our KnockoutJS-based system was completely dead on that platform a few weeks before release -- was enabled by our overly comfortable approach to library use. Heck, I'd consider recommending not using jQuery simply because so many libraries depend on it. Without jQuery, you can't easily slap in everything and the kitchen sink. Unfortunately (he said only half-jokingly), Angular and other templating libraries have their own mini-libs that will gracefully step in if jQuery isn't used, so you can still get into trouble if you're not careful.
Here's my quick 2¢ on templating: If you have data-stores and business logic on the client, you're doing it wrong. It would be interesting to figure out how often Google's folks use jQuery with Angular, and how often they limit themselves to jqLite. In a sense, creating jqLite for Angular is exactly what I'm proposing here. But if you require that folks know how to write standards-compliant Javascript, it quickly becomes clear that your company's culture prefers writing code tailored specifically to your set of problems rather than grabbing something off the rack that you're going to be hemming and patching for life.[3] Over the long term, especially if you don't have legacy code and legacy browser users, you might find you save a lot of time and money.
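For what it's worth, the kind of light client-side rendering that usually pulls in a templating library can often be done with nothing but the language. A minimal sketch (the song data shape here is made up for illustration, and real code would HTML-escape the values):

```javascript
// Render a top-songs list with a plain ES6 template literal --
// no templating library, no jQuery. Note: values are interpolated
// raw here; escape untrusted input before doing this for real.
function renderSongs(songs) {
  return songs
    .map(function (s) {
      return `<li>${s.rank}. ${s.title}</li>`;
    })
    .join("\n");
}
```

That's not an endorsement of pushing your data-stores to the client; it's just a reminder that "I need to render a list" doesn't, by itself, justify a library dependency.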
Update: Looks like I'm not alone in thinking the library soup needs to end. Interesting post from Allen Pike, which also touches on Angular breaking backwards compatibility with version 2. It's bad enough how quickly these things fall by the wayside from neglect. Do popular ones have to die too? Of course, the bottom line is that if you'd stuck to vanilla, standards-compliant, browser-based JavaScript (or minimalist libs), you'd be completely (to very) insulated from moves like this. And, as I mentioned in the aside, if anyone Does the Right Thing, Angular 1.0 will continue to live well past the team's leaving it, if that's what they plan to do. That's the proverbial beauty of open source.

[1] There are certainly other libraries I don't mind using, and some that get close to this "good as a first-party" status, but they're few. I really like MailKit so far on .NET, for instance. And of course I like "real" dbms systems. But I'm library averse to the point that I always have to double-check that I'm not exhibiting NIH syndrome.

[2] I realize this is easier said than done. But places that have to target IE9- are being driven by corporate clients whose workplaces are stuck on Windows XP, I'd wager. The usage numbers suggest that IE10+ is well over half of all IE use, and that leaves less than 7% of users worldwide who need to learn how to install Chrome to their Documents folder.

[3] Yes, I realize you'll be hemming and patching the bespoke stuff too. Hopefully you'll still suffer the analogy.

Labels: coding, javascript, jquery, long, style posted by ruffin at 6/05/2015 11:01:00 AM |
|
| Thursday, June 04, 2015 | |
|
I was trying to install SQL Server Express so I wouldn't have to connect to our dev database, but ran into the dreaded "Setup account privileges" issue (image above, I think). If you're having this problem installing SQL Server Express, I can't guarantee you'll get great news here, but this is how you'd fix it if you could... You can open the Local Security Policy interface (or the Group Policy editor) and add the missing permissions from there, but the buttons to add the ones I needed were disabled for me, even though I'm an admin on my box. This check is apparently known as the HasSecurityBackupAndDebugPrivilegesCheck rule. The best "how to nuke this issue" I found was from this kb article from MS:
Now that'll do it. Well, unless you can't. I can't.
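If you want to see which of these privileges your account's token actually holds before (or after) fighting with setup, a quick way from an elevated Windows command prompt is the following. The `findstr` filter just narrows the list to the privilege names the setup check complains about; run the bare `whoami /priv` to see everything:

```shell
:: List the privileges held by the current token, with their state.
:: "Disabled" means the token has the privilege but it isn't enabled;
:: a privilege missing from the list entirely isn't held at all.
whoami /priv

:: Narrow to the privileges the SQL Server setup rule checks for.
whoami /priv | findstr /i "SeSecurityPrivilege SeBackupPrivilege SeDebugPrivilege"
```

Here's what mine looked like: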
PRIVILEGES INFORMATION
----------------------

Privilege Name            Description                          State
========================  ===================================  ========
SeIncreaseQuotaPrivilege  Adjust memory quotas for a process   Disabled
SeSecurityPrivilege       Manage auditing and security log     Disabled
...
SeBackupPrivilege         Back up files and directories        Disabled
...

Argh. Oh well. It's always a little painful when you have enough power to really get your box into trouble, which I do, but then you can't install something that'd actually help you do your job, making things better for everyone. /sigh

But if I could've installed Express, that's how I could've fixed the SeDebugPrivilege, SeBackupPrivilege, etc. issue. (Note that using

Labels: noteToSelf, SQL Server posted by ruffin at 6/04/2015 11:58:00 AM |