[Andrew Orlowski doesn’t support feedback through the normal El Reg comments system, only by private email (I wonder why), so I’ll reproduce my response to him here]

“So who’ll pay for Internet 3.0, then?”

All server hosting companies - and, therefore, the websites run on them - have to pay a network operator for their fat connection to the Internet. The BBC is no exception: though it may have its own data-centre it will have to pay for its pipe to LINX or wherever.

How that upload fee gets distributed to the last-mile, end-user providers is the real question.

According to The Register, Nokia is in talks to acquire a stake in Facebook with a view to “porting the social network on to Nokia handsets in a major way”. The surprising part is that while Nokia has close to a billion paying customers and Facebook has only 50 million (who hardly pay a bean), Nokia is likely to pay Facebook for the privilege!

So why is Facebook worth so much?

The answer, of course, is that it’s not - the large valuations on Facebook are complete nonsense.

Microsoft paid $240M for a 1.6% share of Facebook to keep Google out and nothing more!

But that’s quite a business model for Facebook - keep finding major players in other markets willing to sign up “exclusive” deals. 240 mil here, another 150 mil there would be enough to keep the Z boy in business cards for life.

So Nokia is either being very clever or very stupid; it’s a shame it’s not clear which.

But I wonder if Zuckerberg understands the irony of running a social website funded by companies who just want to exclude each other.

Back in a previous life I was a Research Fellow in User Interface Design at Sussex University.

At the time I remember being totally in awe of, and humbled by, the work of Randy Pausch on Virtual Reality on 5 dollars a day amongst other things, and how much energy and enthusiasm that team had.

So it’s enormously sad to learn that he has an untreatable form of cancer which will take his life in the next few months…

But he recently gave a lecture on “How to achieve your childhood dreams” which should be an inspiration to all. Make sure you watch out for the “mind fake” at the end…

The previous two posts are from my old iprcom.com site. They were still getting hits from search engines so I guess they’re useful enough (to some people at least) to preserve.

I’ve now shut down my old IT Contracting and Services company IPR Computing Ltd. in order to concentrate on my new “email finder” business.

Originally written around 2002

The mysql command can do quite a lot in batch mode. Here I’ll show how to graph the size of a MySQL table (the number of rows it contains) over time with MRTG.

I’ll assume you have a correct MRTG and MySQL installation.

To get the number of rows in a table we can use the COUNT function in a SELECT. Note that order is a reserved word in SQL, so in MySQL the table name has to be quoted with backticks. To see the number of orders in an example Customer Relationship Management database:

      SELECT COUNT(*) FROM `order`;

Now let’s assume we have a safe MySQL user “bill” with the password “ben” that can read the order table from database “crm” on localhost. In a Linux shell script we can write (the backticks are escaped with backslashes so the shell doesn’t treat them as command substitution):

      mysql -ubill -pben -e "SELECT COUNT(*) FROM \`order\`;" crm | tail -1

Now we can write a script to be used by mrtg. The output format is

    * Line 1: “In” count
    * Line 2: “Out” count
    * Line 3: uptime string
    * Line 4: Title string

We only need the “Out” value and the title string:


      #!/bin/sh
      echo 0
      mysql -ubill -pben -e "SELECT COUNT(*) FROM \`order\`;" crm | tail -1
      echo 0
      echo "Table Size"

If we call this script table-size, make it executable (chmod +x table-size) and put it in the same directory as the mrtg config files, then we can add an mrtg target like this:

      Target[order]: `/etc/mrtg/table-size`
      Options[order]: nopercent,growright,nobanner,nolegend,noinfo,gauge,noi
      Title[order]: CRM order queue
      PageTop[order]: <h3>Number of outstanding orders</h3>
      YLegend[order]: orders
      LegendO[order]: orders 

By adding the transparent option to the Options line, mrtg generates images that can be embedded in web pages over a background graphic.
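For example, a sketch based on the order target above, with transparent as the only addition:

```
Options[order]: nopercent,growright,nobanner,nolegend,noinfo,gauge,transparent
```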

By replacing the first "echo 0" in table-size with another mysql statement, and removing the "noi" option from the mrtg target, you can compare the sizes of two tables in one graph.

Originally written around 2002

MRTG was initially designed to monitor network traffic (hence the name Multi Router Traffic Grapher) - but it is so extensible it can be used to monitor nearly anything!

Here I show how to use mrtg to monitor disk usage on a Unix/Linux box with the df command.
The quick way

I assume you have mrtg installed with the config files in /etc/mrtg:

      cd /etc/mrtg
      wget /downloads/df-mrtg.tgz
      tar xvfz df-mrtg.tgz
      rm df-mrtg.tgz

Edit /etc/mrtg/df.cfg and change the “WorkDir” line to an appropriate directory within your website. You’ll have to create the directory as mrtg won’t do it for you!

Then edit /etc/crontab to include the line

      0-59/5 * * * * root /usr/local/mrtg-2/bin/mrtg /etc/mrtg/df.cfg

Wait for two 5-minute cycles to pass. Cron will send two warning messages to the root user containing lines like:

      Rateup WARNING: /usr/local/mrtg-2/bin/rateup could not read the primary log file for df-root
      Rateup WARNING: /home/local/mrtg-2/bin/rateup Can't remove df-root.old updating log file

etc., one for each of the first two cycles; after that everything should be fine.

The tar file contains only two files:

      -rwxr--r--   1 root    root          659 May  7 13:58 df-mrtg
      -rw-r--r--   1 root    root         3561 Jun 18 12:41 df.cfg

df.cfg controls the mrtg output.

df-mrtg takes one argument, a directory in a disk partition; it reads the df info and formats it for mrtg. It reports the disk usage in 1k blocks because mrtg seems to use 32-bit integers internally - i.e. it can’t handle big enough numbers if you try to report gigabyte disks in bytes!
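The df-mrtg script itself isn’t reproduced here, but a minimal sketch of the same idea might look like the following. This is an assumption about its shape, not the original: it uses POSIX df -Pk output and reports used/total kilobytes as the In/Out pair.

```shell
#!/bin/sh
# Sketch of a df-to-MRTG wrapper (not the original df-mrtg).
# Usage: df-mrtg <directory>
# Emits the four lines MRTG expects: In, Out, uptime, title.
DIR="${1:-/}"
# df -Pk: POSIX single-line format, sizes in 1k blocks
# (kilobytes keep the numbers within MRTG's integer range).
LINE=$(df -Pk "$DIR" | tail -1)
USED=$(echo "$LINE" | awk '{print $3}')
TOTAL=$(echo "$LINE" | awk '{print $2}')
echo "$USED"    # "In": kilobytes used
echo "$TOTAL"   # "Out": partition size in kilobytes
echo 0          # uptime string (unused here)
echo "Disk usage for $DIR"
```

With gauge set in the mrtg target, the graph then tracks absolute usage rather than a rate.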

If you want one of the partitions to be displayed as the default page then edit df.cfg. For example, to display the /home partition by default, change the 9 occurrences of df-home in df.cfg to index.
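The rename is a plain text substitution, so sed can do it in one pass. Here it is demonstrated on a hypothetical sample line (I don’t reproduce the real df.cfg contents); for the real file, back it up first and run sed over /etc/mrtg/df.cfg:

```shell
# For the real file (after backing it up):
#   sed 's/df-home/index/g' /etc/mrtg/df.cfg > /etc/mrtg/df.cfg.new
# Demonstrated here on a made-up sample config line:
line='Target[df-home]: `/etc/mrtg/df-mrtg /home`'
renamed=$(printf '%s\n' "$line" | sed 's/df-home/index/g')
echo "$renamed"    # prints: Target[index]: `/etc/mrtg/df-mrtg /home`
```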

This has been tested on a Sun Cobalt RaQ3, but should work well with only minor changes, if any, on other Unix systems.

Let’s see how quickly an address harvesting spider picks up this address [email protected] - note, you won’t get a reply if you send to that :-)

This is a re-write of a post I’d originally produced for the internal blog where I work. I wanted to bring it out into the public, so to speak, as I may have a sequence of general thoughts that start from here.

The 80:infinity rule - and a plea for the future

One of the problems with the “everything should be open/readable unless specified otherwise” premise favoured by the more vocal in the blogosphere is that security is virtually impossible to strap on as an afterthought module. The security functions needed to implement Chinese walls, Sarbanes-Oxley and other contractual constraints – i.e. the “triple A” of Authentication, Authorisation and Auditing – often (always?) need to be in the core design of a tool or environment to be successful, even if they are usually turned off for collaboration.

Which brings me to the 80:infinity rule.

The joke goes: “the last 20% of a project takes 80% of the time, unfortunately so does the first 80%…”

But with modern RAD/Agile/nom-de-jour tools the first 80% can be done very quickly: within days, hours or even minutes (depending on how well the demonstration is rehearsed :-) ). But in my experience the last 20% is where the interesting stuff happens, and the more bling is devoted to the first 80% (to impress a gullible management) the more likely the last 20% will tend towards infinity.

With vendor products that means being locked into “rolling beta-release”, bleeding edge, and missed deadlines for promised functions.

Does that sound familiar? Is there at least one environment in your workplace that was evaluated only on its first 80%? And as support engineers and developers who’ve had a system dumped on them know, it’s the last 20% that causes the most pain.

In the enterprise where I work I’d guess the last 20% includes things like: AAA, proper LDAP / enterprise directory integration (no, not just Active Directory), speed/scalability, redundancy/resilience, reporting, ownership/traceability (relates to AAA), integration rather than synchronisation, usability etc.

Getting that last 20% correct, right from the beginning, can have a far greater impact on a project’s bottom-line budget than the first 80% ever can.

So, my plea for the future: if you’re in a position to make tool choices, ignore the first 80% as any fool vendor or contractor can implement that. For successful purchases and environments, evaluate for the last 20%… *

* as they say in South Park, “Won’t somebody pleeeese think of the children”

“Every moment in planning saves three or four in execution” - Crawford Greenwalt

Although I can see the initial appeal of claimid, I can’t for the life of me see how it can possibly scale. I mean, how many Ian Rogerses are there?

I’ve had this domain since the days of steam-powered internet, and even wrote up some vanity topics, but then reality caught up with my fiction and online diary-keeping is now called “blogging”.

So I’ve bunged WordPress into the site, which will make it easier to update than firing up NetObjects Fusion (though I don’t think I even have a working copy of that any more…)

Whether or not I have anything interesting to say is another matter :-)