I came into IT at the tail end of the seventies when I joined one of the original computing pioneers.

It was a conservative company led by veterans, engineers, accountants and sales people, with many ties to the US administration, the Department of Defense and intelligence agencies.

My interests at the time were in philosophy, politics and economics. I liked to meet people and talk, and also liked to help people solve real-life business problems, so I was always engaged with the corporate staff and executive management rather than with the real hard-core technicians and engineers.

The thing is, I had no idea what constrained IT, so I never had that baggage when thinking about solutions.

I just made everything as simple as possible, and nine times out of ten the result was great. The one out of ten was always satisfactory.

For six years every business and IT challenge was met, on time, to spec, and to budget.

I was just a corporate staffer (fast becoming a budding executive) doing simple stuff to support sales and marketing colleagues and to help clients move forward. Those simple things just happened to be among the key reasons why many clients were happy to part with their millions.

At the time I also continued to read a lot. James Martin, Codd and Date, Donald Knuth, Stonebraker, Michie and others, but the books that I really enjoyed were from the greats of business (people such as Akio Morita) and the founders and innovators of the computer revolution (Univac, Cray, DEC, Honeywell, Burroughs, Control Data, etc.), detailed in books such as A Few Good Men from Univac.

Elmer Sperry: Inventor and Engineer was also on my reading list; Mr Sperry was “a quiet, understated and insightful genius and inventor whose influence has been felt by all who passed through the Univac school with their eyes wide open”.

But, at that time, IBM was the company to beat, and that’s what we did, time and time again. We were on a roll, and I felt like I was an integral part of it.

In the summer of 1986 I was transferred to our European Centre for Artificial Intelligence, for what was an initial two-year engagement. In Madrid.

We had specialised machines and software for developing AI-based applications. The graphic interface was fantastic, and the speed of these workstations was quite impressive.

One day we were having a discussion about the different patterns available in AI, and how we would use these different patterns or paradigms to solve different problems.

I ended up arguing with a short-tempered guy, a technician who had been converted into some species of evangelist for AI. He was from the other side of the corporate pond, so his point of view carried more weight.

He basically said I was a fool for thinking that you could do AI with anything but specialised hardware and complex software applications and languages. I said I disagreed; after all, the architecture of a mainframe and the architecture of these sophisticated AI workstations were fundamentally no different.

Okay, so I had read John von Neumann early in my IT life, and he hadn’t even heard the name.

Basically it started this way.

We got into a fight.

“You cannot do Artificial Intelligence with our proprietary mainframe products!”

“Want to bet?” I replied.

“No you cannot!”

“Oh, yes I can”

It went back and forth like this, Punch and Judy style, until the Head of the Centre interrupted the exchange.

“Okay, Martyn, if you think you can prove Bubba wrong, then I will give you the time and resources to demonstrate what you can come up with. Deal?”

“You bet!” I enthusiastically replied, not knowing then quite how I would prove that I was right.

“I’ll give you six weeks to come up with a first prototype, then we’ll take it from there”.

“Thanks, boss”. I was now heading up a real Research and Development project for one of the biggest names in IT. No, not Apple or Microsoft, this was before their meteoric rise.

What I didn’t know at the time was this. People had been trying to do what I claimed was possible for quite some time, and without any real success.

But I was like a bumble bee. I didn’t know that in theory I wasn’t supposed to be able to fly. So at the end of the meeting – with a massive heated exchange thrown in for good measure – I flew off and started work on my new challenge.

In four weeks I had the first proof of concept.

I demonstrated the model to the executives at the centre.

Okay, it wasn’t much to look at, but it proved a point.

I had designed and built a simple rules engine, a simple Bayesian hypothesize-and-test engine, a semantic net engine and a process engine, which I had then integrated into a 4GL environment.
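To give a flavour of the first two components, here is a minimal sketch of a forward-chaining rules engine and a Bayesian update step. This is purely illustrative and not the original 4GL code; all rule names, probabilities and function names are invented for the example.

```python
# Illustrative sketch only: a tiny forward-chaining rules engine and a
# single Bayesian hypothesize-and-test update. All names and numbers
# here are invented; the original system was built in a 4GL environment.

def forward_chain(facts, rules):
    """Apply if-then rules to a set of facts until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)  # fire the rule, add its conclusion
                changed = True
    return facts

def bayes_update(prior, likelihood, evidence_prob):
    """P(H|E) = P(E|H) * P(H) / P(E) -- revise belief in a hypothesis."""
    return likelihood * prior / evidence_prob

# Hypothetical rule base: ((conditions...), conclusion) pairs.
rules = [
    (("has_fever", "has_cough"), "suspect_flu"),
    (("suspect_flu",), "recommend_test"),
]
derived = forward_chain({"has_fever", "has_cough"}, rules)
print("recommend_test" in derived)  # True: chained through suspect_flu

# Hypothesize-and-test: start with a prior, revise as evidence arrives.
posterior = bayes_update(prior=0.1, likelihood=0.8, evidence_prob=0.25)
print(round(posterior, 2))  # 0.32
```

The fixpoint loop in `forward_chain` is the essence of a production-rule engine: keep sweeping the rule base until a full pass adds nothing new.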

The next step was to create the Interactive Development Environment which would automate all of the application building, database generation, and integration.

This IDE would have the integrated tools that the clients’ software engineers would use to build AI and expert-system applications directly on their mainframes, and integrate them with their business applications.

This was actually the easiest part in terms of complexity, because the genuinely complex part had already been worked out, up front.

So, several iterations later, we had a robust IDE and a delivery environment, ready to productise.

Looking back, I can see that I had been challenged to come up with a strategy and then to execute it. Technically speaking, it was a first. But I didn’t care about that, because strategically it was a success that flew in the face of conventional wisdom.

Opportunities like this are what people thrive on, still, and these early successes create a lot of positive momentum.