Wed, 02 Sep 2009
Verified by Visa - Designed by idiots
The one thing online that irks me beyond all others, even surpassing chromatic, is Verified by Visa. I hate this service and every site that uses it.
If you've been blessed enough never to have it ruin a transaction, here's the short version: in the middle of paying for something you get bounced, with no clue where you're going or how secure it is, to a third-party site (which is completely safe, as it's run by Visa) that then gets you to enter a password. Or, if you don't know it, to create a new one using nothing more than what's printed on your card.
Firstly, how stupid is that? Whatever happened to something I know, something I have? If I find a lost card I can reset the Verified by Visa password using nothing more than my powers of reading and typing. While we're on the subject of passwords - you're not allowed to use special characters. Numbers and letters only. Thanks, rule out half the possibilities in one sweep for me. And what's with the password history? This thing makes elephants look like /tmp... it remembered all the verified passwords I've ever used, going back about ten iterations - and I change the password a lot because I can't use a decent one and it's easier to reset it than to dig out the old one.
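Half is actually generous. A back-of-the-envelope sketch, assuming eight-character passwords and comparing the 94 printable ASCII characters against the 62 letters and digits they allow:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Keyspace for an 8-character password: full printable ASCII
    # versus the letters-and-digits-only alphabet VbV permits.
    my $length = 8;
    my $full   = 94 ** $length;    # printable ASCII, minus space
    my $vbv    = 62 ** $length;    # a-z, A-Z, 0-9

    printf "Full charset: %.2e passwords\n", $full;
    printf "VbV charset:  %.2e passwords\n", $vbv;
    printf "Thrown away:  %.0f%%\n", 100 * (1 - $vbv / $full);

Which works out at about 96% of the keyspace gone, not half.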
Now suppose you've logged in, got the password right and clicked next, what happens? You get a session / transaction timeout and you have to go all the way back through your order. Thanks for that.
If your site makes me jump through these fake security hoops then I'll go elsewhere. I won't play along anymore - you'll just lose my custom. And hopefully that of many other people.
Tue, 03 Mar 2009
Personal Git Milestone - First Accepted Patch
It's been a day for nice little technical surprises. On the tube ride to work this morning I started flicking through Cisco Routers for the Desperate (2nd edition) and found a quote on the first page from the review of the 1st edition I did a couple of years ago.
I also had my first patch from a fully git-based workflow accepted by upstream. It was only a couple of lines of code but it means I'm gradually getting comfortable with the git toolchain.
Tue, 07 Oct 2008
The answer might be 'it depends'
You're in charge of a server that provides two types of assets. The first type is public and its visibility is important to your company. The second should be restricted access only and shouldn't be public.
Now suppose a mistake is made and the private material is exposed publicly - what's more important: that the public data is available, or that the private data isn't? Who'd make that decision where you work? How long would it take to get an answer from them?
Sun, 03 Jun 2007
Extending the Nagios CGIs - Discouraging Casual Committers
While working on my Nagios display tools I wanted to modify our existing Nagios deployments to link the information in easily, but after a quick dig I discovered that something was very wrong - the Nagios CGIs are written in C.
While shell and perl are my current languages of choice I can write (a very little and very basic) C, but the idea of customising webpages in it, especially pages this critical to the company, stopped me in my tracks. I can understand using the language you're most familiar with when writing software, but if you want to attract contributors you need to match the language to the task. If the Nagios front ends had been written in any of the dynamic languages I'd have spent some time trying to understand and hopefully add to them - but not C; it's the wrong level of language for this kind of work.
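To show what I mean, here's a minimal sketch of the sort of thing a dynamic language makes trivial - a CGI that renders host states as an HTML table by reading status.dat directly. The file path and field names are assumptions to check against your own install:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI qw(header escapeHTML);

    # Render Nagios host states as an HTML table by parsing the
    # hoststatus blocks in status.dat. Path and field names are
    # assumptions - check them against your own install.
    my $status_file = '/var/nagios/status.dat';
    my %state_name  = (0 => 'UP', 1 => 'DOWN', 2 => 'UNREACHABLE');

    local $/ = "}\n";    # read one block at a time
    open my $fh, '<', $status_file or die "Can't read $status_file: $!";

    print header('text/html'), "<table>\n<tr><th>Host</th><th>State</th></tr>\n";
    while (my $block = <$fh>) {
        next unless $block =~ /hoststatus\s*\{/;
        my ($host)  = $block =~ /host_name=(\S+)/;
        my ($state) = $block =~ /current_state=(\d+)/;
        next unless defined $host and defined $state;
        printf "<tr><td>%s</td><td>%s</td></tr>\n",
            escapeHTML($host), $state_name{$state} || 'UNKNOWN';
    }
    print "</table>\n";
    close $fh;

Twenty-odd lines that anyone on the team could extend; the C equivalent is a build system and a recompile away.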
Mon, 16 Apr 2007
Fractally Crap - a system where any piece, when looked at individually, is every bit as broken, badly planned and undocumented as the rest.
And yes, I know that if you pile rubbish on rubbish then you get (strangely enough) rubbish, but you can normally find the occasional little gem or ray of sunshine. Not this month. An often-seen symptom is that every RT ticket you close requires three more be opened for the new issues / problems / challenges that arose while fixing the first. And no, this rabbit hole doesn't have a bottom.
Not my best fortnight ever. Roll on the Nordic Perl Workshop!
Fri, 19 Jan 2007
Black FireFox Baseball cap - Lost at LCA
And it's probably missing me by now, a beer will be purchased for the finder.
This is what blogs are really for ;)
Mon, 08 Jan 2007
Change Control and Version Control are NOT THE SAME THING
And now to one of my pet annoyances...
Change Control is a formal process used to ensure a product, service or process is only modified in line with the identified necessary change.
-- Wikipedia - change control
Revision control (also known as version control, source control or (source)
code management (SCM)) is the management of multiple revisions of the same
unit of information.
-- Wikipedia - revision control
As you can tell from the *different* definitions these two terms do not mean the same thing. They are not interchangeable and, ideally, both should be present. If you're maintaining multiple versions of source code or config files then you have version control. Not change control.
This rant was brought to you by over half the places I've worked. Ggggggrrrrrrrrhhhhhhh!
Sun, 07 Jan 2007
Open Source Questions and the Karma of Answers
I answered a couple of emails that contained questions about code I've written and in return I got a shiny new release of WebService::YouTube which fixes a bug I hit. Gotta love the 'net.
Mon, 01 Jan 2007
Why Don't we Have a .bank?
Why don't we have a .bank or .bank.country_code TLD that's regulated by the same people that regulate the banks themselves? Most countries, with the notable exception of the US (which has multiple National regulators and a second tier of State ones), have a single body regulating all the banks so why not use their established trust metrics (you must be at least this tall to be a bank) to determine who can have a .bank domain?
In addition to helping people find their bank online (although if they can't find it, should they be doing online account management?) it'd help prevent a lot of phishing. I like the idea of a decentralised model (which would have the benefit of local knowledge) rather than a single globe-spanning group, but decentralisation does seem more likely to end up with a very weak link in some small, "legally interesting" country.
Wed, 22 Nov 2006
MS Technet Labs - Initial Impressions
I worked through the MS Technet Virtual Lab Express: Introduction to ISA 2006 Beta demo last night and while the product doesn't really interest me (I couldn't deploy a Microsoft firewall and keep a straight face) the lab itself was interesting.
You enter an email address, download an ActiveX control and wait five minutes for the lab to prepare itself. You then connect, via something that looks like Terminal Services, to four machines and work through the lab notes (a PDF) to get an overview of the new product features. It was actually quite a pleasant experience: the machines felt responsive, you can click around to look at the parts you want, and it took almost no time to get through the notes. The only downside was the screen size - it was damn small, so the dashboard felt tiny and cramped, which isn't what you want when you're seeing a product for the first time.
Due to the excellent price (free!) I'll be looking at some of the others to get a feel for what the newer releases can do.
Wed, 04 Oct 2006
Where are the Second Generation RSS Readers?
I'm subscribed to a lot of RSS and Atom feeds. I've tried online readers but I never found any that could match the user experience of SharpReader so I stuck with it on the desktop. But now I'm starting to want some functionality that none of the readers I've looked at seem to include.
Firstly, the easy stuff: when was the last article posted on a blog? When was the last time I clicked through to it? (I typically double click everything I want to read so it opens in a new FireFox tab, then work my way through them.) How many posts have been made in the last N months? And - this one's a little different to the others - I also want it to try and open posts through Google Cache or the Coral network if they've gone away or the remote site is down.
Now we get to the more... odd stuff. I want to know how long, in total, I had the last twenty posts in the current feed open. This'll help me remove sites whose items I delete with just a glance at the post title. When was the last time I received a successful HTTP response code? (I recently did a clean-out of my feed list and removed anything that 500'd or 404'd at midnight, every night, for five days.) I can gather some of this by working through the application logs but I'd like a nice GUI way of seeing it.
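The dead-feed check, at least, is easy enough to script in the meantime - a minimal sketch, where the feed list file and its one-URL-per-line format are my own assumptions:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    # Nightly cron job: record the HTTP status of every feed so that
    # persistently dead ones (404s, 500s) are easy to spot and cull.
    # The feed list location and format are assumptions.
    my $list = "$ENV{HOME}/.feeds";
    my $ua   = LWP::UserAgent->new(timeout => 20, agent => 'feed-check/0.1');

    open my $fh, '<', $list or die "Can't read $list: $!";
    while (my $url = <$fh>) {
        chomp $url;
        next unless $url =~ /^http/;
        my $res = $ua->get($url);
        printf "%s %3d %s\n", scalar localtime, $res->code, $url;
    }
    close $fh;

Run it from cron and grep the log for anything that's been failing for five days straight.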
My time is one of the most valuable things I can give a site (Amazon and Play already own my bank accounts ;)) so where are the tools to help me spend it wisely?
Tue, 03 Oct 2006
Transcribing Comics - A job for the Mechanical Turk?
There are a couple of webcomics I read on a daily basis (a couple a week in the case of MegaTokyo) and recently I've found myself wanting to link to a couple of different strips in blog postings - and then discovered that they're almost impossible to search through.
None of the webcomics I read regularly have any kind of strip content search. You can't see who was in which strip, you can't search on the punchlines - which is what I want - and, apart from a couple of sites which have a one-sentence summary, you can't get any more context about any day's issue than the time it was uploaded. I understand that transcribing them would be dull as hell, but why not just farm the work out via the Mechanical Turk?
The images are already online, you can filter by asking a couple of sample questions ("which of these is Dogbert?") to determine that the speech will be attributed to the correct person, poll three people per strip to make sure you don't get bitten by typos, and with a decent "submit results" web form you'd get some nicely formatted data (who spoke in which panel, for example) for free. And I get to find the Dilbert strip that ends with "You'd have to lift your arms up and run around."
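The poll-three-people step is just majority voting - a minimal sketch of the idea, with purely illustrative data:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Majority vote over three workers' transcriptions of the same
    # panel: accept a line if at least two workers agree, otherwise
    # flag it for another pass. The data here is purely illustrative.
    my @transcriptions = (
        "You'd have to lift your arms up and run around.",
        "You'd have to lift your arms up and run around.",
        "You'd have to lift your arm up and run around.",    # typo from worker 3
    );

    my %votes;
    $votes{$_}++ for @transcriptions;

    my ($best) = sort { $votes{$b} <=> $votes{$a} } keys %votes;
    if ($votes{$best} >= 2) {
        print "Accepted: $best\n";
    } else {
        print "No agreement - resubmit to the Turk.\n";
    }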
Mon, 18 Sep 2006
A couple of months ago a friend of mine changed jobs and went to work with some mutual techie acquaintances. What made this job interesting to me was the confidential nature of the project and how little he was allowed to say about it. In one of my flippant comments I mentioned that if I REALLY wanted to know I could find out what he was working on. And the bet was made.
I had a little bit of an advantage: I knew him, I knew a couple of his co-workers (all close-lipped buggers when it came to their projects) and I knew their email addresses and del.icio.us handles - and from this I went on a little trip. Watching what they posted to del.icio.us during the day (people feel a little less guilty about tagging stuff for work during work hours), looking at where their email addresses appeared (I got lucky and found a logwatch filter one of them had written for an application server - which was a nice pointer) and, by complete fluke, spotting two of them in a conference photo a mate forwarded to me, I ended up with a number of areas they all seemed to be interested in.
After applying a little bit of filtering - anything hitting the front page of del.icio.us was ignored, as was anything relating to their CPAN modules, freshmeat projects and, in one case, published articles - I was left with what looked like an application stack. And some pointers to the vertical industry they were working in.
So where am I heading with this? Well first of all, curry bought by someone else always tastes nicer than curry you've had to pay for! Secondly, I was surprised by how little effort it took to see what they were interested in with regards to both work and personal projects - even without them writing blogs.
Wed, 13 Sep 2006
Failover Pairs - A short Rant
Let's cover the basics: if you've got two machines working as an identical failover pair then THEY SHOULD BE IDENTICAL. Adding services, hell, adding nearly anything, to only one of them is a mistake. You've now created a bias on which one you need running and you can no longer assume they'll both do the same thing in the same situation. Which defeats the whole point of having them. This might seem obvious, but the number of people who break this simple rule never fails to make that pretty little vein in my neck dance.
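Catching the drift early is scriptable - a minimal sketch that diffs the installed package lists on both halves of the pair. The hostnames are assumptions, and it uses rpm -qa; substitute your own package manager's listing command:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Print any installed package present on one half of a failover
    # pair but not the other. Hostnames are assumptions; swap the
    # 'rpm -qa' for whatever your package manager provides.
    my @pair = qw(node-a node-b);
    my %seen;

    for my $host (@pair) {
        my @pkgs = `ssh $host rpm -qa`;
        die "No package list from $host\n" unless @pkgs;
        chomp @pkgs;
        $seen{$_}{$host} = 1 for @pkgs;
    }

    for my $pkg (sort keys %seen) {
        my @on = sort grep { $seen{$pkg}{$_} } @pair;
        print "$pkg only on: @on\n" if @on != @pair;
    }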
Now we'll discuss testing the failover. You should do regular, scheduled and signed-off failover tests. It might be difficult to get permission for a test when everything is working; this is typically because people don't have enough confidence in the technology, people and process - often accompanied by uncertainty about the length and impact of the outage. In a very chicken-and-egg style, you can only get that confidence by (successfully) performing the test and measuring the impact. You should have a staging setup that'll let you perform the test as many times as you need to get it down pat. And then a couple more times just to be certain before you perform it in production.
This also solves one of the related problems: things that happen rarely don't get tested or explained, and the documentation drifts out of sync with reality. You should have a set of machines in staging that the new guys can play with; these should be tested (along with the documentation) on a set schedule.
An untested failover pair is a working machine and a hope - nothing more.
Provisioning a Fresh Server Install
Once a machine has settled into a rack, how long does it take you to turn it into a working server?
How many of these steps are automated? The longer you can go without making manual changes the more comfortable you can be that the machine's running as it's supposed to be.
What little tweaks do people make once the machine is up? How do you know they've been done correctly on each machine? Do you have a small bundle of configuration checks for local modifications? What happens if they get nuked? Do you notice or do they just drift further out of sync with the baseline deployment (and each other)? Do you use an integrity checker on all machines looking for unauthorised changes?
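That small bundle of configuration checks can be as simple as recorded baseline checksums - a minimal sketch, where the manifest location and its "checksum path" line format are assumptions:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Digest::MD5;

    # Compare live config files against a recorded baseline manifest
    # of "checksum path" lines and report anything that has drifted.
    # The manifest path and format are assumptions.
    my $manifest = '/etc/baseline.md5';

    open my $list, '<', $manifest or die "Can't read $manifest: $!";
    while (my $line = <$list>) {
        chomp $line;
        my ($expected, $path) = split ' ', $line, 2;
        if (!-e $path) {
            print "MISSING  $path\n";
            next;
        }
        open my $fh, '<', $path
            or do { print "UNREADABLE $path\n"; next };
        binmode $fh;
        my $actual = Digest::MD5->new->addfile($fh)->hexdigest;
        close $fh;
        print "DRIFTED  $path\n" if $actual ne $expected;
    }
    close $list;

Run it from cron on every box and anything nuked or tweaked by hand shows up the next morning.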
How long does the complete process take from start to finish? How does this fit in with your MTTR numbers? If it takes an hour to build a server and you've got an MTTR of 30 minutes on a critical mail server then you've got problems.
Do you need to manually add new machines to other, external, systems and/or processes? Nagios for monitoring? DNS? Documentation on your intranet? How do you keep these in sync and how often are they audited?
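For the Nagios side, one way to stay in sync is to generate the host definitions from the same host list you provision from - a rough sketch, where the host list location and its "hostname ip" format are assumptions:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Generate Nagios host definitions from a provisioning host list
    # of "hostname ip" lines, so monitoring tracks what's actually
    # deployed. List location, format and template name are assumptions.
    my $hosts = '/etc/provisioned-hosts';

    open my $fh, '<', $hosts or die "Can't read $hosts: $!";
    while (my $line = <$fh>) {
        next if $line =~ /^\s*(#|$)/;    # skip comments and blanks
        my ($name, $ip) = split ' ', $line;
        print "define host {\n",
              "    use        generic-host\n",
              "    host_name  $name\n",
              "    alias      $name\n",
              "    address    $ip\n",
              "}\n\n";
    }
    close $fh;

The same list can feed DNS zone fragments and the intranet documentation, which kills the audit problem at the source.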
Why is it not as easy as just plugging the thing in anymore :)
Mon, 11 Sep 2006
Dynamic Languages and the Big Players
Over the last week both Ruby and Python have had moments in the sunshine; between Jim Hugunin's (now of Microsoft) IronPython 1.0 release and Sun hiring JRuby developers it's nice to see the bigger players notice how far dynamic languages have come.
So what do the little-languages-that-can get from this? It's a decent-sized list: a huge range of well-written libraries (both .NET and Java have a ton of supporting code available and a lot of it is damn good), a large potential user base (especially for IronPython) and enterprise recognition; while more forward-thinking developers know all about the benefits of dynamic languages there are a lot of late adopters that are about to see the shiny things for the first time. And hopefully some of them will stick with it.
So what does this mean for perl, my scripting language of choice? Well, IMHO the advantage CPAN gave is reduced somewhat (and destroyed in certain areas, like XML support and webservices) thanks to the volume of libraries both virtual machines can access. As for Parrot, I've never been a huge fan. I'm not an expert on VMs (and I've never contributed anything to any of them so I've got no right to bitch :) but it always seemed a bad idea to try and build your own when the market had two powerful ones available. On a non-technical level we've just lost a huge potential market: Python has always had nicer-feeling Windows integration and I've got a feeling it won't be long until we start seeing it popping up in the MSDN code samples.
Update: I forgot to mention how cool IronPython and XAML are together. Have a look at the August 2006: IronPython screen cast.
Sat, 09 Sep 2006
Three Basic Release Rules
Here are three nice, simple, general rules regarding releases that you should try and stick to. If you don't then you're running on luck, and eventually you'll get called while doing something way more fun than deploying yet another bug-fixing release.
- No releases on the day before a weekend / national holiday.
- No releases within two hours of the official end of your work day.
- No releases before you go away on holiday.
These should all be common sense (and can be ignored on the *RARE* occasions when something needs to happen right now) but I'm constantly surprised by the number of people that ignore them, make the release and then earn the enmity of their team as people start getting SMS and email alerts from their monitoring systems.
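The first two rules are easy to mechanise as a pre-release guard; the third you'll have to remember on your own. A minimal sketch in core Perl, where the 18:00 day end and the holiday list are assumptions to fit your own calendar:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Pre-release guard: refuse to deploy on a Friday, within two
    # hours of the end of the working day, or the day before a listed
    # holiday. The 18:00 day end and the dates are assumptions.
    my $day_ends_at = 18;                          # 24-hour clock
    my %holiday_eve = ('12-24' => 1, '12-31' => 1);

    my ($hour, $mday, $mon, $wday) = (localtime)[2, 3, 4, 6];
    my $date = sprintf '%02d-%02d', $mon + 1, $mday;

    die "No releases the day before a weekend.\n"   if $wday == 5;
    die "No releases within two hours of day end.\n" if $hour >= $day_ends_at - 2;
    die "No releases the day before a holiday.\n"   if $holiday_eve{$date};

    print "Release window open - go ahead.\n";

Wire it into the front of your deploy script and the argument happens with a program instead of a person.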
And a bonus trick: if you want sensible release times (none of this six AM insanity), ensure you've got developer and management support in case something goes wrong and you need either guidance or roll-back assistance. It's weird how the available windows change when it's other people's time (and sleep) on the block.
Tue, 22 Aug 2006
Enable ICMP Internally - Or I'll Find You...
When designing internal firewalls and filtering policies *PLEASE* stop and think about ICMP Echo Request and ICMP Echo Reply (the ICMP types used by ping). If you turn these off you're not gaining any real security (especially on your internal network - and to be honest you want to think long and hard about what turning them off on the external-facing machines gets you) and you're making life much harder than it needs to be in the long run.
Network diagnostics and host discovery are two simple, and quite common, tasks that become a hell of a lot harder, and consume more time and resources, if you turn ICMP off. It also annoys the hell out of new staff as they try and learn about your networks, and irks people you ask to do you a "quick favour".
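Here's the sort of host-discovery sweep that ICMP filtering quietly breaks - a minimal sketch using the core Net::Ping module (the subnet is an assumption, and raw ICMP pings need root):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Net::Ping;

    # Sweep a /24 with ICMP echo requests and report which hosts
    # answer. Needs root for raw ICMP; the subnet is an assumption.
    # If a firewall drops echo-request, live hosts silently vanish.
    my $subnet = '192.168.1';
    my $p      = Net::Ping->new('icmp', 1);    # 1 second timeout

    for my $i (1 .. 254) {
        my $host = "$subnet.$i";
        print "$host is alive\n" if $p->ping($host);
    }
    $p->close;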
FireFox Extension - Disable / Remove Weirdness
I use a LOT of FireFox extensions, and in an attempt to slim my install down I disabled the less-used ones so I could remove them in a week or so if I hadn't needed them. The first stage was easy: right click the extensions in the Extensions menu and choose "disable". I then carried on using FireFox as normal; I didn't need the extensions removed immediately so I didn't restart it.
A couple of days went past, the browser had been restarted a few times and I wasn't missing any of the functionality. So I removed the extensions thinking all was well. Very foolish of me.
I first started to notice my tabs were acting a little differently a couple of days later, after I'd removed the (previously disabled) extensions and restarted the browser. External applications were opening a new tab each time rather than replacing the rightmost one, and other little bits like that. But this only started after the remove, not the disable - which is wrong.
I don't know why; I'm assuming that something doesn't understand "disable" and left the extensions running even though the GUI displayed them as greyed out, and then when I removed them it actually actioned the change. I've not (yet) been able to reproduce it and it's bugged the heck out of me for almost ten whole minutes. I'm mentioning this in the hope that someone else has seen the same thing but actually managed to reproduce it or work out what's wrong.
Wed, 16 Aug 2006
Show Disk Usage in Windows - and a little hack
One of my favourite Windows applications is WinDirStat, a great little utility that breaks down disk usage by file and folder and shows it using a treemap. The treemap is possibly the best way of displaying this kind of information; in addition to the obvious "block size is relative to the file size" you also get colour-coded file types (you soon learn to spot clusters of mp3s...) and easy right-click access to most of the functionality you'll want while investigating disk hogs.
And now the little hack. The Show Usage registry file adds a "Show Usage..." right-click option to all folders; if you invoke it then WinDirStat will start running from that point, saving you from opening the application, navigating to your current place in explorer and then running it.
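The file boils down to something like this - a sketch assuming the default install path and that windirstat.exe takes the target folder as its first argument; check both against your own install:

    Windows Registry Editor Version 5.00

    ; Add a "Show Usage..." entry to the right-click menu of every
    ; folder, launching WinDirStat rooted at that folder (%1).
    [HKEY_CLASSES_ROOT\Directory\shell\ShowUsage]
    @="Show Usage..."

    [HKEY_CLASSES_ROOT\Directory\shell\ShowUsage\command]
    @="\"C:\\Program Files\\WinDirStat\\windirstat.exe\" \"%1\""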
Notes: if you've installed WinDirStat somewhere other than the default you'll need to amend this file manually and set the path to the executable. Be warned: this is a registry file and running it may eat your machine. If so, I don't want to know. It worked for me on a Windows 2003 eval install (don't ask).