Monday, March 1, 2010

How *not* to sell your product

Let's play imagine... Imagine you have a killer app, one that slays all its competition. You, the author and chief architect, have laboured tirelessly in the bowels of the corporate giant until one day you could take no more, and taking up the yoke of liberty you cast off the shackles of corporate dictatorship! Taking all your knowledge of how and how not to solve the problem, you go off and write your dream app, the one you always wanted to write, the one your old customers were clamouring for. It sings, it dances, it is elegant!

Armed with such perfection, and knowledge of your previous customers, you successfully market it to them; they migrate en masse and breathe a collective sigh of relief as all the things that used to be hard or impossible become easy, turnkey even!

Then what? Well, why stop there? You go on to find new customers and opportunities. But your company being small and agile, you, the chief developers, find yourselves doubling as the prime second-tier salesmen and support, sitting in bright rooms explaining why people should spend large amounts of money buying your software.

This is where you can go horribly wrong... Up to this point, you've played the game well, dodging and swerving and outmanoeuvring the competition. But now, the temptation is to continue as before, treating the problem as a purely technical one, with technical solutions. While this has been true up to now, to really succeed at this point you need to realise that you've crossed a boundary. As soon as someone in the room has a title like "Chief X Officer", or even "Manager of X", it's a whole different dialogue.

The worst thing you can do here is continue with what's worked so far, which is explaining what the software does, how it does it, and most importantly, how smart it is for doing it that way. The truth? They don't care. They want to know about Value! How your software makes them money, saves them money, gives better visibility or makes life easier in some significant way. That's how you'll sell them your product.

Even when talking to the poor developers who must customise your shining example of perfection, espousing its glory in mundane detail will just bore them to tears. Merely explaining how things work is meaningless. Two weeks of descriptions and hand-waving aren't worth a few thoughtful worked examples in source.

Ah... Source! If you're planning on letting customers or 3rd parties customise your application (notice I didn't say code), give them the source! Hire a good lawyer to come up with a licence and a CiC or NDA if it makes you feel better. This sends a strong message of trust to the client/partner and makes life a *lot* easier for any customisers getting up to speed on the internals of your application. If you don't, they'll just decompile it and hate you, so you might as well give it to them (once they've paid, of course!)

To summarise: being a great coder/architect is invaluable, but if you actually get successful at what you do, you need to realise that tailoring your pitch to your audience is the next skill to learn. When you know the internals inside-out (literally!) it's all too easy to wax lyrical on implementation details. It's a lot harder, but much more beneficial, to translate your knowledge into terms the people in front of you can understand. Then you're not just programming software, you're programming wetware.



Thursday, March 19, 2009

CYA security

I read this article when it came out a few years back;

Really good coverage of why the existing security measures were ineffective against each threat until *after* that threat had already been encountered, which is too late.

Reading this article now:

reminded me of it, especially this bit:
"In the final three months of last year, (AIG) lost more than $27 million every hour. That's $465,000 a minute, a yearly income for a median American household every six seconds, roughly $7,750 a second. And all this happened at the end of eight straight years that America devoted to frantically chasing the shadow of a terrorist threat to no avail, eight years spent stopping every citizen at every airport to search every purse, bag, crotch and briefcase for juice boxes and explosive tubes of toothpaste. Yet in the end, our government had no mechanism for searching the balance sheets of companies that held life-or-death power over our society and was unable to spot holes in the national economy the size of Libya (whose entire GDP last year was smaller than AIG's 2008 losses)."

So OK, terrorists haven't managed to fly any more planes into buildings since 9/11; that's a good thing. In the meantime, personal liberties in western countries have been eroded to an enormous degree, the US has invaded a country (that still shocks me!) on false pretences, and has become a party to illegal detention and the use of torture. What price your soul?

At the same time, the greatest threat to US (perhaps world?) national security since the end of the Cold War has been brewing, in the form of the current freeze-up of the financial markets triggered by the sub-prime mortgage collapse. So we successfully defended against repeats of the last threat (arguably, some might say), while completely missing the next great one. If you tally up the financial cost and the increased mortality from crisis-related suicides and murders over this period, I dare say the toll in dollars and human lives will outstrip 9/11.

Yes, we need to guard against attacks similar to those encountered in the past, but we also need open eyes to see dangers coming from new and unexpected directions. Prior to 9/11 there was intelligence about a potential attack using planes that was ignored; leading up to the current crisis there were warning signs that were ignored. We need to stop ignoring good intelligence just because we haven't seen its like before.

Monday, March 9, 2009

Sheeple or the wisdom of crowds?

This is disturbing:
But on the other hand, what of the wisdom of crowds?
What's going on here? It seems that on the one hand individuals can be easily manipulated using peer pressure from groups, while on the other, crowds seem to make better decisions than individuals. Perhaps these are complementary points of view, not contradictory ones? Perhaps the conscious action of the majority of people can be easily swayed, but the unconscious action of groups (displaying emergent behavior?) tends to cluster around optimal maxima far faster and more accurately than the conscious efforts of experts?

Could it be that while human psychology is deeply flawed, the behavior that emerges from interactions between individuals and environments can be insightful?

How do we engineer society to avoid, or at least minimise, the harmful effects of our own nature? Sometimes we need to follow the crowd, and sometimes we need to ignore it and travel against the flow. How do we decide when to do which?

A lot of questions, no real answers.

(edit)
Coming back to this post later, it occurs to me that perhaps we should observe the crowd and be aware of what is causing it to cluster or disperse, but not necessarily follow the crowd. Know thyself...

Thursday, March 5, 2009

The joy of refactoring

Imagine you have an infrastructure that allows you to run batch jobs asynchronously. However, you can't specify exactly when you want a job to run (à la cron); all you have is a polling interface, so your "service" will wake up every x minutes, do something, then sleep again. Finer-grained control is up to you.

This is our situation, and we usually specify a start time, e.g. "23:00", an end time, e.g. "24:00", and have a bit of logic somewhere to abort the run if the current time is outside that window. I finally noticed the bit of logic that does this, and realised it was duplicated: every service had a *copy* of the same code. Behold:
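(A representative sketch; every service carried its own slight variation on this Calendar dance.)

    import java.util.Calendar;

    // Copy-pasted into each service's poll method, give or take:
    private boolean outsideRunWindow() {
        Calendar now = Calendar.getInstance();

        Calendar start = Calendar.getInstance();
        start.set(Calendar.HOUR_OF_DAY, 23); // "23:00"
        start.set(Calendar.MINUTE, 0);
        start.set(Calendar.SECOND, 0);

        Calendar end = Calendar.getInstance();
        end.set(Calendar.HOUR_OF_DAY, 24);   // "24:00" -- a lenient Calendar quietly rolls this over to 00:00 tomorrow
        end.set(Calendar.MINUTE, 0);
        end.set(Calendar.SECOND, 0);

        return now.before(start) || now.after(end);
    }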



Yuck.

Needless to say, I didn't like the duplication, or the code itself. I didn't have scope to retrofit the other services that used similar code, but I could fix this one and build a foundation for later.

The first step was to refactor that mess out to a common util class, leaving us with this:
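(Again a sketch rather than the exact class, but this is the shape of it: the same Calendar logic, just living in one place.)

    import java.util.Calendar;

    public final class TimeUtil {

        private TimeUtil() {}

        // First cut: the old Calendar logic, just extracted.
        // Times are "HH:mm" strings, e.g. "23:00".
        public static boolean isWithinWindow(String start, String end) {
            Calendar now = Calendar.getInstance();
            return !now.before(todayAt(start)) && !now.after(todayAt(end));
        }

        private static Calendar todayAt(String hhmm) {
            String[] parts = hhmm.split(":");
            Calendar c = Calendar.getInstance();
            c.set(Calendar.HOUR_OF_DAY, Integer.parseInt(parts[0]));
            c.set(Calendar.MINUTE, Integer.parseInt(parts[1]));
            c.set(Calendar.SECOND, 0);
            c.set(Calendar.MILLISECOND, 0);
            return c;
        }
    }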


...
and this:
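(The call site in each service shrinks to something like:)

    if (!TimeUtil.isWithinWindow("23:00", "24:00")) {
        return; // outside our window, go back to sleep
    }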



The first cut was just a raw extraction; the next step was to get rid of the ugly Calendar code and handle the midnight cutover correctly (which is actually quite hard!). After a few false starts with Dates and GMT and daylight-savings offsets, we ended up with this (convenience methods and javadoc not shown), which is unit-tested and guaranteed to work (except maybe during the daylight savings cutover itself...)
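(A sketch of the final shape; the trick is to compare minutes-since-midnight, so a window like 23:00-01:00 wraps correctly. Class and method names here are mine, not necessarily the real ones.)

    import java.util.Calendar;

    // A time-of-day window that may wrap past midnight, e.g. 23:00-01:00.
    public final class TimeWindow {

        private final int startMinutes; // minutes since midnight, inclusive
        private final int endMinutes;   // minutes since midnight, exclusive

        public TimeWindow(int startHour, int startMinute, int endHour, int endMinute) {
            this.startMinutes = startHour * 60 + startMinute;
            this.endMinutes = endHour * 60 + endMinute;
        }

        public boolean contains(Calendar when) {
            int m = when.get(Calendar.HOUR_OF_DAY) * 60 + when.get(Calendar.MINUTE);
            if (startMinutes <= endMinutes) {
                return m >= startMinutes && m < endMinutes; // plain window
            }
            return m >= startMinutes || m < endMinutes;     // window wraps midnight
        }

        public boolean containsNow() {
            return contains(Calendar.getInstance());
        }
    }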



Much nicer; well, we think so anyway.

If you can see a bug in there please let me know!!

The joy of Spring!

I had the opportunity to Springify a co-worker's project while he was on leave the other day. It already sort-of used Spring, but he had inverted the inversion of control (yeah... think about it: passing the context everywhere and pulling single beans out, scary). Because of the architecture we were variously running into "got minus one from read call" errors (database-side connections exhausted) or "No ManagedConnections available within configured blocking timeout" (JBoss connection pool exhausted), depending on whether you used his Oracle config (multiple connection pools with a max-size of 100) or mine (max-size 10 per pool).

So! Time to throw out cruft, delete code and do proper dependency injection! Always great fun. I once did a project where most of the risk was in the business algorithm, and the data requirements started out fairly simple, so I rolled my own database layer and got going. Later in the project the data requirements started to grow, and transaction demarcation and correct handling became an issue, so I took the time to Spring-ify it, and the joy of going from raw JDBC SQL to JdbcTemplates and Hibernate HQL was huge! Dozens of lines of verbose JDBC try-catch-finally evaporated into one-liners. Just great!
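To give a flavour of the before and after (a made-up query, but the shape is exactly this):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;
    import org.springframework.jdbc.core.JdbcTemplate;

    // Before: raw JDBC, paying the try-catch-finally tax on every query.
    public int countOrders(DataSource dataSource) {
        Connection con = null;
        PreparedStatement ps = null;
        ResultSet rs = null;
        try {
            con = dataSource.getConnection();
            ps = con.prepareStatement("select count(*) from orders");
            rs = ps.executeQuery();
            rs.next();
            return rs.getInt(1);
        } catch (SQLException e) {
            throw new RuntimeException(e);
        } finally {
            if (rs != null) try { rs.close(); } catch (SQLException ignored) {}
            if (ps != null) try { ps.close(); } catch (SQLException ignored) {}
            if (con != null) try { con.close(); } catch (SQLException ignored) {}
        }
    }

    // After: JdbcTemplate takes care of connections, cleanup and exception translation.
    public int countOrders(JdbcTemplate jdbcTemplate) {
        return jdbcTemplate.queryForObject("select count(*) from orders", Integer.class);
    }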

I do love taking well-written OO code and re-wiring it with Spring; you can throw out huge amounts of code and eliminate bugs you might never have known about: connection-pooling issues, correct rollbacks, a whole host of junk like that. A lot of the ugly casting and boilerplate just evaporates, and you're left with getters and setters (which is another issue entirely; we should have properties!)

At the moment I tend to use a Builder pattern to bootstrap Spring from the init code (not a full Spring stack yet...) and get a handle to a Spring bean configured with all the resources (it knows how to orchestrate the DAOs and usually contains most of the business logic), which then takes over processing. I could probably do better, but it works pretty well for the moment and is easy enough to unit test, so I'm happy.
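(A sketch of the shape, with invented names; the real builder does a bit more, but not much.)

    import org.springframework.context.ApplicationContext;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public static void main(String[] args) {
        // Stand Spring up once, pull out a single fully-wired entry-point
        // bean, and hand over control to it. Bean and class names invented.
        ApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml");
        BatchService service = (BatchService) ctx.getBean("batchService");
        service.process();
    }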

Thursday, February 19, 2009

No more garbage collection pauses!!

I was listening to a back episode of the Java Posse the other night, from the '07 roundup "Whither Java?" session (around 63:10), and heard someone mention the "-Xincgc" option for the Sun JVM, which switches from the default collector, pauses and all, to an incremental collector.

This changes the behavior from big, ugly, noticeable pauses during full garbage-collection sweeps to an incremental model where the pauses aren't noticeable, with the trade-off that it uses more CPU overall. So for batch-type, long-running, CPU-intensive operations the default collector will marginally out-perform the incremental one, but for user-visible operations the big noticeable pauses go away.

Technically, this forces the JVM to use the Concurrent Low Pause Collector, as documented in Tuning Garbage Collection with the 5.0 Java Virtual Machine. Interesting reading if you have the time.
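Trying it is just a matter of adding the flag to your launch command (jar name made up, obviously):

    java -Xincgc -jar myapp.jar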

Script it Script it Script it!

This has been said many times before, but I'm gonna say it again: anything worth doing twice is worth scripting!

In our environment the development databases are refreshed from production and sanitised weekly. This is scripted; it happens on the dot, every week, without fail (catastrophes excepted!).

All of our local builds are automated: type "ant" or "mvn compile" and magic happens.

All of our server builds are automated: an "svn ci" kicks off a build automatically, which notifies you if something goes wrong.

The project I'm currently working on requires some manual configuration, then proceeds to turn the database inside out, and is not easily backed out (I've tried, and it was more pain than it was worth!). So every full integration test needs about 8 separate configuration steps before it can run. They've all been scripted (SQL in this case). Now I know I can refresh my database from production with sanitised data, run one script to reload the config, and run a full 4-hour integration test, all with complete confidence that I'm starting from a fully reproducible slate.

The first time you do something, fair enough, do it without regard to rigorous scripting (but save any commands you run). When you come to do it a second time, you have a decision to make: if there's even the remotest chance you'll be called on to do this again, script it. In fact, even if you think you probably won't have to, script it anyway. I've lost count of the number of times I've had to repeat the thing that was "surely just a once-off, never again!" and wished I'd scripted it to start with.

I've found that, most of the time, scripting something takes only slightly longer than doing it ad hoc. When you script something you exercise the muscle between your ears, which is worth it if nothing else. Then as soon as you have to do it a second time, it pays for itself, and again every single time after that.

I once had to step in for a client and run some web statistics for them while the usual staffer who performed the role was away. It normally took him the best part of a day, between copying log files from the server, setting up the config, waiting for the tool to process the logs, then copying the results to the web directory. The first time I did it, it took me about two days, maybe three, which included figuring out the process as well as scripting it while I waited. The second time, it took me the five minutes needed to update the links on the stats page to point to the new stats, because everything had run automatically before I got to work. Time well spent.

I'm sure I'm preaching to the choir here, and everyone reading this blog will be nodding their heads as if I'm ranting about how rocks fall down instead of up, but it's something we all too easily forget in the everyday rush: taking the time to keep our minds and our tools sharp.

Got a large log file to parse (1GB+)? Instead of wrestling with some editor, try writing a perl/python script to parse it for you, or even a shell script with a grep pipeline.
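Something like this, say (hypothetical log layout; adjust the fields to taste):

    # count ERROR lines per hour without ever opening an editor,
    # assuming lines like "2009-02-19 14:03:22 ERROR ..."
    grep ' ERROR ' huge.log | cut -d' ' -f2 | cut -d: -f1 | sort | uniq -c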

Let the machine work hard while you work smart. You'll save time, be more relaxed for it, and who knows, you might even enjoy it!