Category Archives: Technology

Lost on Mars – Christmas 2003

Christmas Day 2003 was meant to be a special day for the exploration of Mars, and for a charismatic scientist named Colin Pillinger. In the end, it became the key moment in a story of loss and rediscovery, innovation, lessons in program management, and flawed but inspirational leadership.

A probe called Beagle 2 – christened after the ship that supported Darwin’s famous voyage of exploration – was intended to land on the Martian surface in the small hours of Christmas morning, UK time.

Its destination was Isidis Planitia, a vast impact basin that sits across the border between the ancient highlands and the northern plains of the Red Planet.

Beagle 2 was a low-cost, small-scale and minimalist British spacecraft. It had, however, audacious goals to directly search for life. Colin Pillinger was its Principal Investigator.

After piggybacking across tens of millions of miles of empty space, the craft detached from its mother ship, the European Space Agency’s (ESA) Mars Express, on 19 December 2003 and began its fall towards Mars on a relentless ballistic trajectory.

A Violent Landing

The plan was this. Beagle 2 would slam into the Martian atmosphere at 20,000 km/h. After a violent deceleration, parachutes were to deploy. Then, just two hundred metres above the Martian landscape, large airbags would inflate to cushion the final impact of the vehicle. The lander would bounce on the surface at about 02:45 UT on 25 December 2003, and come to a stop.

At that point the top of the lander would peel open, pushing out four solar panels. A signal would then be sent to Earth immediately after landing and another the next Martian day to confirm that Beagle 2 had survived both the landing and its first lonely and cold night on Mars.

That was a prelude to the real science. A panoramic image of the landing area would later be taken using the stereo camera. The lander arm was to dig up soil samples for analysis, and a probe nicknamed the mole would inch its way across the surface. Beagle 2 would begin to make its contribution to history.

No Signal

It didn’t happen that way, of course. Instead, on that Christmas morning, there was a cold, distant silence.

A search began. Throughout January and February, Mars Express, the American orbiter Mars Odyssey, and even the great Lovell Telescope at Jodrell Bank attempted to pick up a signal from Beagle 2. Throughout, the Beagle team said they were “still hopeful” of detecting a signal.

But each time there was nothing: no signal, no sign of the little craft.

The Beagle 2 Management Board declared Beagle 2 lost on 6th February 2004. On 11 February, ESA announced an inquiry would be held into the failure of the mission. The results of that inquiry would prove to be highly critical.

Origins

The voyage of Beagle 2 started within the Open University – a British distance-learning institution created in the 1960s. Its scientists have been major contributors to the study of a group of meteorites blown off the surface of Mars, and to the question of the suitability of the ancient Martian environment for life.

In 1997 ESA announced Mars Express with a 2003 launch date. It was then that Colin Pillinger of the Open University team, and a member of the ESA Exobiology Study Group, proposed a lander. The craft would be dedicated to looking for life and conducting chemical analysis of the Martian environment. The name Beagle 2 arose quickly, and Colin himself gave the rationale:

“HMS Beagle was the ship that took Darwin on his voyage around the world in the 1830s and led to our knowledge about life on Earth making a real quantum leap. We hope Beagle 2 will do the same thing for life on Mars.”

The journey had started.

A Very British Eccentric

Colin Pillinger was a larger-than-life figure, cast in the mould of the archetypal British eccentric scientist. He lived on a farm and possessed “mutton-chop” whiskers that made him instantly recognizable. Personally, he could be challenging. Professor David Southwood of Imperial College would say:

“My own relationship with him in the Beagle years was stormy … Fitting the much bigger Mars Express project, as I had to, around Colin’s far from standard approach was not easy and he could be very exasperating. Nonetheless, he had genius, a very British genius”.

He could also be inspirational. Professor Monica Grady was once one of Pillinger’s PhD students. She would say:

“He was a determined and controversial figure. I crossed swords with him more than once in the 35 years I have known him. But he was enthusiastic, inspirational and never failing in his drive to promote planetary sciences and the science that would come from missions to the moon and Mars. He was one of the most influential people in my life.”

He and his team certainly had a flair for grabbing attention. To put the Beagle 2 project on the map and attract financial support, they arranged for the mission’s call-sign to be composed by the band Blur. The calibration target plate, intended for testing Beagle 2’s cameras and spectrometers after landing, was painted by Damien Hirst.

Pillinger raised sufficient funds to attempt the mission – although funding was always very light by international standards. A consortium was created to build the probe across the Open University, the Universities of Leicester and Wales, Astrium, Martin-Baker, Logica and SCISYS.

Mars Express launched from Baikonur on 2 June 2003, and on it sat its little disk-shaped companion: Beagle 2.

Analysis of the Failure

In May 2004 the findings of the ESA report were published in the form of 19 recommendations, many of which speak about issues that will be familiar to any student of program management.

It could be – and certainly was – read as an indictment of Colin Pillinger’s leadership and management style. The Telegraph newspaper would say the report:

“is believed to criticize the management of the project and blame a lack of testing, time and money for its failure. While he is not named directly, the report is likely to be seen as critical of Professor Colin Pillinger … even before the probe left for Mars … critics of Prof Pillinger warned that Beagle 2 had not been adequately tested”

There were recommendations that covered accountability, and adequate resourcing. Others mentioned the need for appropriate systems level documentation and robust margins to cope with the inherent uncertainties of space flight.

There was also an underlying assumption that Beagle 2 had failed catastrophically. Accordingly, many of the recommendations covered testing of all kinds, and especially the shocks and stresses of entry and landing.

But the truth was that Beagle 2’s fate was a mystery. The probe – without landing telemetry – had simply vanished.

Aftermath

Colin Pillinger continued to display his usual spirit after the report. He said shortly after publication “It isn’t over with Beagle by any means.” He continued to push for another landing attempt, but unsuccessfully.

Then tragically, after a period of ill health, he died unexpectedly of a brain hemorrhage, just two days before his 71st birthday at Addenbrooke’s Hospital in Cambridge on 7 May 2014.

The obituaries were respectful. Just getting Beagle 2 started was seen as an achievement, and many gave him the credit for rekindling British interest in space and space flight.

But he died not knowing what had happened to the spacecraft he had conceived and built.

History is Rewritten

Beagle 2 located by NASA’s Mars Reconnaissance Orbiter (image PIA19107)

But then history was rewritten.

On 16 January 2015, it was announced that the lander had been located intact on the surface of Mars by NASA’s Mars Reconnaissance Orbiter, in the expected landing area within Isidis Planitia. The images had been taken in 2013, but not analyzed until after Pillinger’s death.

The images showed the probe had landed properly and partially deployed, with its parachute and back cover nearby. Some of the solar panel petals had deployed, but not all, preventing deployment of its radio antenna. Beagle 2 appeared to have been just a few mechanical movements and one faulty motor away from success.

One newspaper claimed:

The history books must be re-written to show that the Beagle 2 mission was a success after the first pictures of the probe proved that it did land safely on Mars, vindicating lead scientist Colin Pillinger.

Dr David Parker, then CEO of the UK Space Agency, would comment:

“Beagle 2 was much more of a success than we previously knew. The history books need to be slightly rewritten to say that Beagle 2 did land on Christmas Day 2003.”

Conclusion

In the final analysis, Beagle 2 was a most peculiar space mission – conceived by a British eccentric and poorly funded. It was also, perhaps, poorly program-managed.

However, it was genuinely inspirational, becoming a project that touched people’s hearts and minds. There is no doubt it failed, but we now know it also came within moments of absolute, joyous triumph. This little, underfunded craft almost worked. As Colin Pillinger himself once said:

“A little setback like a lost lander should not discourage visionaries.”

Images used in this article are from ESA and NASA.

Keith Haviland is a business and digital technology leader, with a special focus on how to combine big vision and practical execution at the very largest scale, and how new technologies will reshape tech services. He is a Former Partner and Global Senior Managing Director at Accenture, and founder of Accenture’s Global Delivery Network.  Published author and active film producer, including Last Man on the Moon. Advisor/investor for web and cloud-based start-ups.

Digital Abundance, and the Second Half of the Chessboard

We live in a time when the rate of change in digital and cloud technology is exponential. The word “exponential” is used a lot, often without rigour, but in this case the statement reflects reality closely, and the implications are perhaps staggering. We are already seeing – and often taking for granted – a rate of innovation greater than in any other period of digital history.

Fable of the Chess Board

To understand the extraordinary power of exponential growth, let’s start with the fable of placing rice (sometimes wheat) on each square of a chessboard, starting with one grain on square one, two grains on square two, four grains on square three and so on – doubling each time. The well-known question is: how many grains of rice would be on the chessboard at the finish? The story is often told in the form of a servant speaking with the Chinese emperor, but the tale is linked more clearly to the writings of Islamic scholars around the 10th century, or sometimes to the invention of chess itself in India.

The final square alone would end up with 2 raised to the power of 63 grains. That is a very large number indeed, and there would be enough rice on the board that, placed end to end, the grains would span the gap to the nearest star, Alpha Centauri, and back again.
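
For readers who want to check the arithmetic, the short Python sketch below reproduces those numbers; the grain length (around 7 mm) and the distance to Alpha Centauri (about 4.37 light-years) are rough assumed values used only for illustration.

    # Grains of rice on a chessboard, doubling on each of the 64 squares.
    final_square = 2 ** 63        # grains on the 64th square alone
    total_grains = 2 ** 64 - 1    # grains across the whole board

    # Rough illustration only: assumed grain length and distance to Alpha Centauri.
    GRAIN_LENGTH_M = 0.007        # ~7 mm per grain, laid end to end (assumption)
    LIGHT_YEAR_M = 9.4607e15      # metres in one light-year
    ALPHA_CENTAURI_LY = 4.37      # distance in light-years (assumption)

    line_length_m = total_grains * GRAIN_LENGTH_M
    round_trips = line_length_m / (2 * ALPHA_CENTAURI_LY * LIGHT_YEAR_M)

    print(f"Final square: {final_square:.3e} grains")        # ~9.2e18
    print(f"Whole board:  {total_grains:.3e} grains")        # ~1.8e19
    print(f"Alpha Centauri round trips: {round_trips:.2f}")  # ~1.6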

Moore’s Law

Gordon Moore working at Intel in 1970

The one real place in human endeavour where this type of process exists is IT. It is (of course) enshrined in Gordon Moore’s “law”: in 1965 he predicted a doubling every one to two years in the number of components per integrated circuit.

Moore’s Law is now a cliché, mentioned in articles and on stages an untold number of times. It is not even a law in the normal sense, but a remarkably astute observation. But as a description of actual progress it is real, and remains real. It also carries through to memory capacity, disk capacity, the number of pixels in digital cameras, and much more. The drum beat of progress is remarkable, sustained, even relentless.

Ray Kurzweil and the Second Half of the Chessboard

Ray Kurzweil

In 2001, Ray Kurzweil – computer scientist, inventor and futurist – wrote a seminal essay about the rate of change in digital technology. It contained the following observations about the rice and chess parable, used to illuminate the future power of the Moore’s Law process.

It should be pointed out that as the emperor and the inventor went through the first half of the chess board, things were fairly uneventful. The inventor was given spoonfuls of rice, then bowls of rice, then barrels. By the end of the first half of the chess board, the inventor had accumulated one large field’s worth (4 billion grains), and the emperor did start to take notice. It was as they progressed through the second half of the chessboard that the situation quickly deteriorated …. One version of the story has the emperor going bankrupt as the 63 doublings ultimately totaled 18 million trillion grains of rice. At ten grains of rice per square inch, this requires rice fields covering twice the surface area of the Earth, oceans included. Another version of the story has the inventor losing his head.
Ray Kurzweil from “The Law of Accelerating Returns”

In other words, it is in the later phases of exponential growth that the effects become extraordinary, and beyond all common-sense models. Kurzweil uses this as part of building the case for the singularity – a predicted epoch of miraculous tech-driven change – that sits at the ragged edge of futurist thinking.

1958 – 2006

Erik Brynjolfsson and Andrew McAfee of MIT develop the chessboard metaphor further in their excellent book “The Second Machine Age”.

They take 1958 as their starting point. The late ’50s were a remarkable and largely forgotten period of progress in tech, when many of the foundational concepts were created. 1958 also marks the first use of the term “information technology”, in the Harvard Business Review.

Assuming a doubling of IT power every 18 months, we entered the second half of the chessboard in 2006 – a year that saw the launch of Twitter, YouTube and Amazon Web Services in a form we would understand today.
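
The arithmetic behind that date is straightforward: the halfway point of the 64-square board comes after 32 doublings, and 32 doublings at the assumed 18 months each is 48 years from 1958. A minimal sketch in Python:

    # Second half of the chessboard: 32 doublings from the 1958 starting point.
    START_YEAR = 1958
    MONTHS_PER_DOUBLING = 18      # assumed doubling period for IT power
    DOUBLINGS_TO_HALFWAY = 32     # half of the 64 squares

    years_elapsed = DOUBLINGS_TO_HALFWAY * MONTHS_PER_DOUBLING / 12
    print(int(START_YEAR + years_elapsed))   # -> 2006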

Digital Resource Abundance

The point that Brynjolfsson and McAfee are making is this: remarkable capacity is now available, and continuously increasing, for innovators, inventors and entrepreneurs. Such abundance of resources allows us to have driverless car technology, smartphones with the capacity of high-end PCs of the past, and games consoles with the capacity of former supercomputers. Digital abundance has also led to the first usable voice-based agents such as Siri, vast and responsive social networks, and robots that begin to mechanically move and act in the world like humans or animals. We have data, and potential insight, at scales that stretch our ability to describe them in the current metric system. Social commerce enterprises, like Uber and Airbnb, have connected legions of customers and citizen suppliers on a breathtaking scale. We have arrived in the foothills of the future sooner than we were perhaps expecting.

Implications for Enterprises

For enterprises, this richness of compute and storage power allows the redundancies that make large-scale cloud computing not only feasible, but competitively essential and inevitable. For most purposes, it is already inherently cheaper, more (potentially) agile, and more secure. The new technologies also facilitate new types of business model, new sources of insight on a gigantic scale, and new demands from end clients. Immediacy in business matters more than ever. As a result, enterprise IT has embarked on a long period of transformation and change – maybe a decade of marvels and dark dangers. Any organisation now needs to think more about tech opportunity and invention than about optimisation of the server estate or cost per development hour.

Whether we will create true AI in the next decade, the next century, or ever remains an unanswerable question. The current rush to digital will prove to be part bubble, driven by over-enthusiasm. There will be broken promises, and conventional challenges around service and costs. Legacy rarely dies, but grows larger.

But what is clear is that the opportunity to invent and innovate grows ever more profound as we move into the next great phase of digital history. It’s time for imagination, and for all technology practitioners to look forward.

Keith Haviland

Keith Haviland is a business and digital technology leader, with a special focus on how to combine big vision and practical execution at the very largest scale, and how new technologies will reshape tech services. He is a Former Partner and Global Senior Managing Director at Accenture, and founder of Accenture’s Global Delivery Network. 

Published author and active film producer, including Last Man on the Moon. Advisor/investor for web and cloud-based start-ups.

Failure is an Option

Things will always go wrong, but excellent preparation and strong leadership can turn failure into a kind of success.

The story of Apollo 13 is a parable of gritty resolve, technology excellence, calm heroism and teamwork. For anyone focused on leadership, operations and program management it is absolutely the purest of inspirations.

The film of Apollo 13 centres around the phrase “Failure is Not an Option,” coined after the original drama in a conversation between Jerry Bostick – one of the great Apollo flight controllers – and the filmmakers. It summarises a key part of the culture of Apollo-era NASA, and it has found its way onto the walls or desks of many a leader’s office. It is part of the DNA of modern business culture, and of any sizeable delivery project.

Damaged Apollo 13 Service Module

Lessons from the Space Program

But one of the reasons that the crew was recovered was this: throughout its history, NASA and mission control knew that failure was precisely an option, and they designed, built and tested to deal with that simple truth. The spacecraft systems had – where physically possible – redundancy. The use of a Lunar Module as a lifeboat had already been examined and analyzed before Apollo 13. In the end, an old manufacturing defect caused an electrical failure with almost catastrophic consequences. It was precisely because Mission Control was used to dealing with issues that Apollo 13 became what has been called a “successful failure” and “NASA’s finest hour.”

The ability to respond like this was hard earned. The Gemini program – sandwiched between the first tentative manned flights of Mercury, and the Apollo program that got to the moon – was designed to test the technologies and control mechanisms needed for deep space. It was a very deliberate series of steps. Almost everything that could go wrong did: fuel cells broke, an errant thruster meant that Gemini 8 was almost lost, rendezvous and docking took many attempts to get right and space walks (EVAs in NASA speak) proved much harder than anybody was expecting. And then the Apollo 1 fire – where three astronauts were actually lost on the launch pad – created a period of deep introspection, followed by much redesign and learning. In 18 months, the spacecraft was fundamentally re-engineered. The final step towards Apollo was the hardest.

But, after less than a decade of hard, hard work – NASA systems worked at a standard almost unique in human achievement.

So, with near infinite planning and rehearsal, NASA could handle issues and error with a speed and a confidence that is still remarkable. Through preparation, failure could be turned into success.

Challenges of a Life More Ordinary

All of us have faced challenges of a lesser kind in our careers. I was once responsible for a major software platform that showed real but occasional and obscure issues the moment it went into production, despite expensive testing. We put together an extraordinary SWAT team. The problem seemed to be data-driven, software-related and simply embarrassing. I nicknamed it Freddie, after the Nightmare on Elm Street movies. It turned out to be a physical issue in wiring – which was hugely surprising and easily fixed. The software platform worked perfectly once that was resolved.

Another example: In the early days of Accenture’s India delivery centres, we had planned for redundancy and were using two major cables for data to and from the US and Europe. But although they were many kilometres apart, both went through the Mediterranean. A mighty Algerian earthquake brought great sadness to North Africa, and broke both cables. We scrambled, improvised, maintained client services, and then bought additional capacity in the Pacific. We now had a network on which the sun never set. It was a lesson in what resilience and risk management really means.

Soon enough, and much more often than not, we learnt to handle most failures and problems with fluency. In the Accenture Global Delivery Network we developed tiered recovery plans that could handle challenges with individual projects, buildings, and cities. So we were able to handle problems that – at scale – happen frequently. These included transport issues, point technology failures, political actions and much more – all without missing a single beat. Our two priorities were firstly people’s safety and well-being, and secondly client service, always in that order.

Technology – New Tools and New Risks

As technology develops, there are new tools but also new risks. On the benefit side, the Cloud brings tremendous, generally reliable compute power at increasingly low cost. Someone else has thought through service levels and availability, and invested in gigantic industrialized data centres. The cloud’s elasticity also allows smart users to side step common capacity issues during peak usage. These are huge benefits we have only just started to understand.

But even the most reliable of cloud services will suffer rare failures, and at some point a major front-page incident is inevitable. The world of hybrid clouds also brings new points of integration, and interfaces are where things often break. And agile, continuous-delivery approaches mean that the work of different teams must often come together quickly and – hopefully – reliably.

The recent Sony incident shows – in hugely dramatic ways – the particular risks around security and data. Our technology model has moved from programs on computers to services running in a hybrid and open world of Web and data centre. The Web reflects the overall personality of the human race – light and dark – and we have only just begun to see the long-term consequences of that in digital commerce.

Turning Failures into Success

What follows is my own summary view of the key steps required to handle the inevitability of challenges and problems. It is necessarily short.

1. Develop a Delivery Culture – Based on accountability, competence and a desire for peerless delivery and client service. Above all, there needs to be an acknowledgement that leadership and management are about both vision and managing and avoiding issues. Create plans, and then be prepared to manage the issues.

2. Understand Your Responsibilities – They will always be greater in number than you think. Some of them are general, often obvious and enshrined in law – if you employ people, handle data about humans, work in the US, work in Europe, work in India or work across borders, you are surrounded by regulations. Equally importantly, the expectations of your business users or clients need to be set and mutually understood – many problems are caused by costing one service level and selling another. Solving a service problem might take hours or days. Solving a problem with expectations and contracts may be the work of months and years.

3. Architect and Design – Business processes and use cases (and indeed users!) need to account for failure modes. The design for technical architectures must acknowledge and deal with component and service failures – and they must be able to recover. As discussed above, cloud services can solve resilience issues by offering the benefits of large-scale, industrialised supply, but they also bring new risks around integration between old and new. Cloud brings new management challenges.

4. Automate – Automation (properly designed, properly tested) can be your friend. Automated recovery and security scripts are much less error prone than those done by people under stress. There are many automated tools and services that can help test and assess your security environment. Automated configuration management brings formal traceability – essential for the highest levels of reliability. Automated regression testing is a great tool to reduce the costs of testing in the longer term.

5. Test – Test for failure modes in both software and business process. Test at points of integration. Test around service and service failures. Test at, and beyond, a system’s capacity limits. Test security. Test recovery. Test testing.

6. Plan for Problems – Introduce a relevant level of risk management. Create plans for business continuity across technology systems and business processes. Understand what happens if a system fails, but also what happens if your team can’t get to the office, or a client declares a security issue.

7. Rehearse – Invest in regular rehearsals of problem handling and recovery. Include a robust process for debriefing.

8. Anticipate and Gather Intelligence – For any undertaking of significance, understand potential issues and risks. Larger organisations will need to understand emerging security issues – from the small, technical and specific to more abstract global threats. Truly global organisations will sometimes need to understand patterns of weather – for example, to determine if transport systems are under threat. (I even once developed personal expertise in seismic science and volcanism.)

9. Respond – But finally acknowledge that there will be major issues that will happen, and such issues will often be unexpected. So, a team must focus on:

  • Simply accepting accountability, focusing on resolution and accepting the short-term personal consequences. It is what you are paid for.
  • Setting up a management structure for the crisis, and triggering relevant business continuity plans.
  • Setting up an expert SWAT team, including what is needed from suppliers.
  • Reporting diagnosis and resolution – accurately, simply, frequently, and without false optimism.
  • Communicating with stakeholders in a way that balances information flow with the need for a core team to focus on resolution.
  • Handling the media, if you are providing a public service.
  • And, after the problem is solved and the coffee machine is temporarily retired, making sure the team learns.

And finally a Toast …

In previous articles, I have acknowledged the Masters of Delivery I have come across in my varied career.

In the domain covered by this article, I have worked with people in roles such as “Global Asset Protection” and “Chief Information Security Officer”, and with teams across the world responsible for business continuity, security and engineering reliable cloud services. They work on the kind of activity that often goes unacknowledged when things go well – but in the emerging distributed and open technology world of the future, they are all essential. To me, these are unsung “Masters of Delivery.” Given this is the start of 2015, let’s raise a virtual glass in celebration of their work. We all benefit from it.

Keith Haviland

This is a longer version of an article originally posted on LinkedIn. Keith Haviland is a business and technology leader, with a special focus on how to combine big vision and practical execution at the very largest scale, and how new technologies will reshape tech services. He is a Former Partner and Global Senior Managing Director at Accenture, and founder of Accenture’s Global Delivery Network. Published author and active film producer, including Last Man on the Moon. Advisor/investor for web and cloud-based start-ups.

I Saw a Mash-Up of Royalty, Business and New Tech Innovation. It Worked.

Over the last two years I have been working with a small number of start-ups. These are mostly digital and cloud-based, although one is bringing innovation to large-scale consumer goods, and has built an impressive robotic production line near Cambridge. As a result, I have begun to build a classic entrepreneur’s network.

So, a few weeks ago I received an invitation to an event called Pitch@Palace, which is exactly what its name suggests – a start-up demo-day style event that was to be held at St James’s Palace on November 5th (a day that traditionally – and in this case ironically – marks the Gunpowder Plot of 1605, when Guy Fawkes and other conspirators attempted to blow up the House of Lords).

Pitch@Palace is led and sponsored by the Duke of York who introduces the program on its website with:

 “British prosperity, in all its forms, is central to my work. I want to recognise and reward the people and organisations working to ensure that we have the workforce, intellectual property and entrepreneurial culture to succeed.”

I wasn’t sure what to expect. The event would be well intentioned I was sure. Worthy. But could it be connected to the technical zeitgeist, relevant, genuinely innovative?

In the end, I was simply impressed. Impressed enough, in fact, to write this little post. The event was a job well done by all those involved.

There is something about being in a proper, full-on Palace, of course. The event was held in the spectacular apartments around the throne room – with great ceilings, fine artwork, chandeliers and gilt mirrors. I managed to take my own selfie a few feet in front of the throne – the use of mobile devices being encouraged throughout the event. Pitch@Palace was very well attended and the palace was crowded and full of energy and the buzz of conversation.

It turned out that the pitch day had been supported by a process that ensured the start-ups on show were very high quality. Forty-one start-ups/entrepreneurs had been selected from a network of fourteen partner organisations – tech accelerators, and University and government sponsored schemes. During October, the Duke of York held a “Pitch@Palace Bootcamp” at Central Working Space, part of a huge accelerator facility in the Mile End Road in the classical East End of London, in partnership with Microsoft Ventures, Wayra and KPMG. A panel of judges selected 15 of the start-ups to present.

The main event was kicked off by the Duke of York. It was the first time I have seen him speak. He gave an urbane, quietly passionate speech about the program – grounded in a real sense of business reality, and strongly encouraging those in attendance to contribute. It was an introduction that any top-flight CEO would have been proud to have made.

Then came a series of three-minute pitches (supported by additional materials available on the web). What was immensely pleasing was the breadth of innovation and ideas on show. Ideas ranged across digital and physical tech, and across the categories of consumer technology, education, environment, medicine, robotics and gaming.

There isn’t space to describe all of the fine fifteen finalists, but I liked Insignia Technologies with smart labeling to reduce food waste, Pure Marine who aim to crack the challenges of wave energy, Terra Recovery with a mission to mine existing landfill, Armourgel with a product that protects the vulnerable against injury, Reach Robotics that makes gaming robots, and Insane Logic whose digital apps make speech and language therapy easily available and affordable. The winner of the vote at the end of the evening was Squirrel, who aim to empower low-income employees through digital tools to manage and save their money.

There were other strong products in the original long-list of forty-one that had their own booths spread through the Palace. I liked See.Sense that manufacture (in Northern Ireland) an intelligent bike light that shines brighter and flickers faster when an internal accelerometer detects change, and so enhances visibility at key moments.

So, apart from a good event, what conclusions can be drawn from the evening?

First point: Pitch@Palace emphasizes the way that business innovation and a culture of entrepreneurship have established themselves in the UK, on a strong foundation of tech innovation. There has been a real change over the last decade. The sector appears much more mature than during the original dot.com frenzy at the turn of the century. There is a way to go – some ideas require larger funding than is commonly available early in the UK, and there needs to be more support for the creation of effective channels for new companies (an ex-colleague of mine has created a fine business that does just that). But, overall, we have developed a culture and infrastructure that can create new forms of growth.

Second point: I was impressed with seeing so much hardware and physical product. And some of this was being manufactured in the UK. The UK is now very strong in media and digital production, but it needs to be stronger across all manufacturing.

Third point: Many of the ideas and products presented – by design of the Pitch@Palace process – had a strong social or environmental edge. They were uniformly good business ideas as well. The evening felt remarkably progressive.

So, last night I saw a mash-up of Royalty, business and bright, new tech innovation. It worked.

The Bifurcation of Technology and the Revolution in the IT Industry

Sometimes people start to use a phrase or word that captures a moment of change. You hear friends and colleagues using it, and it starts to crop up in the media. One such example I’ve heard several times in the last few weeks is bifurcation, as a dry shorthand for the current momentous transformation in IT and IT services. The trends I noted in an article (here) in the summer are accelerating, and fast.

A recent, excellent article in the Economist covers this well. The bifurcation is the dual-track nature of growth in IT. Services and products related to mobile and cloud are expanding, sometimes with extraordinary growth rates. Conversely, traditional IT sectors are growing slowly or even shrinking. The sectors under pressure include most types of hardware, traditional enterprise software, and classical IT services.

The combination of the differential growth characteristics means the IT industry overall is showing modest growth. The Economist quotes a number of 3% overall. Other commentators will give numbers even closer to zero. It is a challenging environment.

One result of this is the beginning of significant change in the corporate structures of IT suppliers. Larger companies are acquiring faster-growing companies. That is the usual cycle. More profoundly, some large companies will radically reshape themselves. As the Economist describes “HP’s recent decision to break itself up was merely the opening shot … Others will shed businesses that have become commoditised …IBM announced that it will pay Globalfoundries, a contract chipmaker, to take its semiconductor business off its hands.”

The changes in technology driving these changes in business are very real. Over the last 20 years, the relentless increase in available compute power, network bandwidth and storage capacity has moved us to a world where a wide variety of very powerful devices – not always operated by people, but increasingly by other machines – can connect reliably to remote services of increasing breadth and sophistication.

And what this means is that such services can potentially take advantage of real economies of scale, and can be built and provided to the entirety of the universe of consumers and business with an ease that a generation ago would have seemed startling.

A new underlying industry architecture for software is forming. It includes a complex infrastructure layer that provides cloud services, which itself faces real change as the concepts of commoditized data centre and commoditized server become blurred. It includes a complex range of platform options that link humans and their devices to apps and cloud services. The architecture is crowned by applications and functional services – and it is the richness of these that will accelerate the change in IT. Importantly for established businesses, there is an explicit need to add an integration layer to the architecture – since we are on a decade-long transformation, and the interfaces with legacy systems will be key concerns. Overall, the concepts of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) introduced by Gartner have served us well, but need refreshing as this form of architecture becomes dominant.

These changes in architecture also change the expectations for the delivery of software and services. New companies especially want their back-end support systems to be easily, immediately and cheaply available, and provided as elastic services that will grow with them. They do not seek uniqueness or differentiation here. Instead, they want innovation and rapid development of those services that face their customers. More generally, monolithic applications are being replaced by systems of services. This brings more architectural complexity, but it allows development to be parallelized and – if well managed – delivered in much more agile ways.

There are obvious dangers here for the providers of IT services (and you can include IT departments under this heading) who are sometimes surprisingly uninterested in the way they deliver their technology services, although this is often what clients are buying.

For example: a typical feature of large companies is operator dominance, where a focus on cost becomes primary – growth in a changing world is much harder and requires fortitude. Taken to excess, this becomes optimization of legacy services and over-reliance on tools such as global delivery – wonderful as part of a toolkit, but most effective in combination with client-facing services that bring new technology opportunity into the heart of businesses. Indeed, the best India-based IT providers realize exactly that, and understand that conventional outsourcing now has a limited shelf life.

Another sign of dysfunction is mash-ups of old and new that resemble failed experiments in genetic engineering. We all know of large projects where agile approaches have been introduced at too large a scale to deal with mad schedules, and where clients and suppliers try to handle this with conventional procurement approaches. Fixed-price contracts and flexible iteration can be unlikely bedfellows.

But real, sun-bright opportunity at scale also exists. An eco-system of service providers has appeared around the dynamic and fast-growing company Salesforce. Salesforce transaction volumes run to nine figures daily, and much of this volume flows through its platform technologies, showing that people are building their own apps around its Software-as-a-Service core.

Another positive example: I have come across one agile-based company that hires the very best developers – aspiring to the top 1% – and undertakes only small projects with direct and strong business support. It seems to genuinely deliver the benefits of agile approaches, with great reliability. This emphasizes to me a coming focus on skills and expertise that can marry client need and the power of new tech. Like all times of change, smart, tech-savvy people who understand clients and can integrate complexity will be at a premium.

And the growth rates of many larger consultancies are respectable, or simply plain good – reflecting client needs for advice and support in their transformations.

So, as always I end very optimistically. There are new opportunities for those technology service providers who can develop architectures and architects for the new world, and who can create app and tech services that can be reused across their clients. Good companies of the future will get to grips with better, more nimble ways of integrating, assembling and crafting solutions for clients as systems of services. They will invest in building the new skills and high-end expertise for 21st century delivery – both close to clients and in their global centres. They will create new types of career, and new types of personal opportunity.

I will be writing about these positive trends in future articles. Stay tuned.

Keith Haviland is a business and technology leader, with a special focus on how to combine big vision and practical execution at the very largest scale, and how new technologies will reshape tech services.
He is a Former Partner and Global Senior Managing Director at Accenture, and founder of Accenture’s Global Delivery Network.
Published author and active film producer, including Last Man on the Moon. Advisor/investor for web and cloud-based start-ups.

Steam, War Computers and Social Media: Representing Technology in Film

The London Film Festival is currently on – running from 8th to 17th October across 17 venues, showing over 250 films. As always, it offers an extraordinarily rich program from filmmakers across the world, in a city that has now firmly become one of the movie capitals of the world.

Film, of course, is now a truly digital business. In the last few years, digital cinema cameras like the Arri Alexa have made physical rolls of films essentially redundant, and give very high-definition results of great quality. Production activities like data wrangling and conforming are tasks centred on managing video data that will seem very familiar to anyone who has dealt with configuration management on a software project. And when films are shown in a modern cinema, they will be stored as a Digital Cinema Package or DCP – a standardised collection of digital files. Everywhere there is also experimentation with digital distribution, the art of getting content to consumers in different ways on different devices, and the web’s video services have unleashed a vast wave of low-cost creativity. Technology is transforming filmmaking.

The London Film Festival itself exploits modern technology, including live streaming of its red carpet galas to cinemas across the country and use of its own BFI player for festival related content. London overall is a place where technical skills abound in the new digital crafts around post-production, special effects and 3D.

But this article is not about the digital revolution in the making of film; it is about how technology is increasingly part of the dramatic content of film, and how technology and especially digital technology is represented. The examples are – naturally enough – drawn from the London Film Festival.

The inspiration for the article was accidental. I had booked each film I wanted to see for the normal reasons – I liked the look of the film, or respected the filmmakers, or was interested in the buzz surrounding the movie. But, as I watched each of them – usually somewhere in Leicester Square – I was struck by how often technology intruded, and how often directors tried to find ways of representing technology in general, and the digital world in particular, on screen.

Mr. Turner

Let’s start in an unlikely place with Mr. Turner. It is a marvellous film, from director Mike Leigh. It is a biography of perhaps the greatest English painter JMW Turner (1775 to 1851) who was known as “the painter of light” and who anticipated both Impressionism and modern Abstract Art.

Set in the first half of the 19th Century, it succeeds at recreating the period with a sense of truth that is unusually powerful – through its authentic and sometimes very funny dialogue, its recreation of the manners and moral temper of the period, and its careful choice of locations.

It shows off Turner’s art of course – and is visually rich and sometimes stunning – but the film also brings to life the man himself: a successful, eccentric and harrumphing curmudgeon born outside the establishment, who then became very much part of it. Turner is vividly played by Timothy Spall, with ungainly confidence and much humour.

But one of the most unexpected parts of the film for me was the way it shows an older Turner experiencing changes in 19th century society, and especially the impact of technology. Mike Leigh successfully conveys a deep sense of the move from the Georgian to the truly industrial Victorian era.

Examples: the means of passage from London to Margate changes from steamer to train during the film. There is a wonderful, funny sequence where Turner and his mistress are photographed by an American master of this new technology. He is armed with the latest cameras and equipment, including a head brace to help with long exposures. During this process, Turner ponders on the future effects on art.

And in one of the grandest sequences of the film, Turner and a group of friends watch the tall-masted and exhausted warship The Fighting Temeraire being towed to its break-up by a steam tug. This was to inspire what is one of the most famous, reproduced and loved paintings by a British artist.

During the scene, one of Turner’s companions looks at the great tall ship and remarks melancholically: “The ghost of the past.”

Turner prefers instead to observe the blackened, low shape of the steam tug: “No,” he shouts back, “The past is the past. You’re observing the future! Smoke. Iron. Steam!”

This presentation of technology, as a set of dynamic changes and images seen through the curious eyes of an artist, is highly effective. The film ends up being as much a biography of the early Victorian age – an age of steam, coal, industry and transformation, with the young Albert and Victoria putting in an appearance themselves – as it is a biography of Turner.

The Imitation Game

The Imitation Game moves us directly into the first days of the digital era. Indeed a key moment of the film, set during World War II, is when Keira Knightley pronounces the phrase “digital computer” awkwardly, as though it is being said in the world for the very first time.

The film is a biographic study of Alan Turing, played with suitable coldness and fragility by Benedict Cumberbatch. Turing was a taut, difficult personality, often retrospectively diagnosed as autistic. He was a mathematician, cryptologist, and one of the first computer scientists, introducing many key foundations of that discipline. The name of the film itself is taken from one of Turing’s papers where he develops the concept of the Turing Test. This is a test for whether machines can ever think, and whether they could ever imitate a human mind.

It is a brilliant choice of title, since the film is about deceit at many levels – including the original Enigma codes, the hiding of the success of Allied code-breaking and the passing of false information to the Russians. Above all, there is the hidden nature of Turing’s own sexuality in a time when male homosexuality was a criminal offence.

The bulk of the film’s plot – with quite a lot of dramatic simplification – is centred on the breaking of the German Enigma code at Bletchley Park. The resulting intelligence was labelled Ultra (from Ultra Secret), and Churchill would later tell King George VI: “It was thanks to Ultra that we won the war.” The film itself repeats the common suggestion that Ultra shaved two full years off the war.

In the film, the core of this process is a computer-like device – a bombe in the terminology of the time – that eventually is successfully programmed to break German encrypted messages on a daily basis. Turing’s efforts to design, build and operate the machine, and manage the team around it, occupies much of the story.

The visual and dramatic vocabulary that the film uses to describe its technology is taken straight from 1950s Science Fiction. In some ways, this is a perfect choice, since the 1940s wartime acceleration of technology would influence the world-view of the 1950s. So, we have common archetypes such as:

  • The central character of a lonely, arrogant boffin, dressed in tweed, who has a mission to save the world. Indeed, Turing is warned several times in the film to avoid thinking of himself as God.
  • Plain-speaking, slightly dim military men whose job seems to be to place obstacles in the way of the hero.
  • Sudden moments of inspiration where a chance remark opens the door to the instant resolution of a complex problem.
  • Diagrams and mathematical text assembled in great linked masses showing the workings of another “Beautiful Mind”
  • The great machine itself, a clunking mass of cylinders and valves that rotate remorselessly – like a vision of a Babbage difference engine. It reminded me of the whirl of mechanical computation machines in the classic “When Worlds Collide”.

The vision of technology here is cold, hard-edged and relentless – similar to Turing himself. Overall, the film succeeds – it is a good, moving and watchable piece of work, with strong performances throughout, but perhaps the plot works itself out a little too mechanically, echoing the code-breaking machine at its heart.

Men, Women and Children

This film – by Jason Reitman and starring Adam Sandler, Jennifer Garner and a large ensemble cast – is completely contemporary. It is about the lives of middle-class Americans, and how modern motivations and complexities are wired together by personal and social technology. It is based on an original novel by the controversial author Chad Kultgen.

Technology is represented in two ways. Firstly, there is a series of digital special-effects sequences about the Voyager missions to the outer solar system, narrated by Emma Thompson. The connecting link between these sequences and the rest of the film is the use of words – at the very end of the film – that Carl Sagan wrote about the “Pale Blue Dot” photo. This famous image of the distant Earth was taken by one of the Voyager probes as a final act of observation. The sequences are beautiful, and the narration sometimes very funny, but this part of the film feels a little contrived and unnecessary.

The second representation of technology – of websites and social media – is much more relevant and direct. This is not achieved through conventional shots of a PC or smartphone screen, but via pop-up windows that represent what is being shown on a device. These appear beside the main characters in the film, popping in and out of existence like speech or thought bubbles. This is effective, and helps the narrative flow. It soon seems strangely natural. It also provides opportunities for real humour, when people text what they are really thinking of the person they are talking to.

The themes covered are those social issues generated or amplified by technology. One example plotline: the character Tim Mooney (played by Ansel Elgort) is a schoolboy football player of real talent. But he quits the sport – to the vast disapproval of his father Kent (Dean Norris) – to obsessively play online games. He also learns of his absent mother’s new marriage via Facebook. Increasingly alienated, he finds solace in a relationship with intellectual, book-reading teenager Brandy Beltmeyer (Kaitlyn Dever).

Her mother Patricia (played with steel by Jennifer Garner) is one of the strongest characters in the film. Her obsession is the Internet life of her daughter, which she monitors, restricts and controls with total authority, before she finds out that her daughter has a secret and rebellious alternative web identity. This sets up a near-tragic incident, in which Patricia impersonates her daughter to persuade Tim that their relationship is over. As a consequence, Tim takes an overdose that he barely survives.

Other examples of the film’s threads include a teenager so corrupted by pornography he cannot form a normal relationship, a married couple who organise parallel infidelities via websites, a teenager who damages her life chances by putting overly racy images on the web, and another – borderline anorexic – who gets advice on extreme dieting from virtual friends on the web.

There is much humour, especially at the beginning, but in the end the film takes a grim view of humanity. However, the representation of the technology works well, and allows parallel threads of plot and meaning to be shown on screen. It is a successful recreation of people’s abstract virtual lives.

Dearest and Rosewater

Both these films are reconstructions of real stories, where technology is part of the story, but not the prime driver. Both were – for me – unexpectedly moving, and illuminated very different cultures.

“Dearest” is a Chinese film, directed by Peter Chan, which covers the sensitive subject of child abduction in China. Although fictionalised, it is based on a true story that Chan came across in a TV documentary. It is well acted, humane and gives real insight into the social world of modern China.

Tian Wenjun (Huang Bo) and his ex-wife Lu Xiaojuan (Hao Lei) lose their child PengPeng through abduction. They spend three years searching – using the web as a means of communicating across the vastness of China, and connecting with others in their situation. Eventually, they locate their son in a remote village. The film then – remarkably and successfully – switches its point of view entirely to the heartbroken woman Li Hongqin (Zhao Wei) who has been looking after the abducted child.

In Dearest, the technology dimension is treated entirely conventionally, with the focus always on the actors. It is well made, and a delightful film, but rooted in traditional filmmaking.

“Rosewater” is the story of London-based journalist Maziar Bahari (played by Gael García Bernal) who was detained in Iran for 100 days, while his British and pregnant girlfriend waited for him in London. It was written and directed by Jon Stewart, who was connected with the case.

The film starts with Bahari getting increasingly involved in the events around the Iranian presidential elections, and their violent aftermath. He is arrested and spends four months in solitary at Evin Prison, being interrogated by a “specialist”. Since he is blind-folded, his experience of the interrogator is through the scent of rosewater that surrounds him.

Technology threads itself through the film in two ways. It is shown as one of the motors of change in Iran, with the opposition fluent in use of the web and internet. TV news has also connected Iranian youth to the wider world. At one point, Bahari is introduced to a “university” that is simply a vast array of satellite dishes, hidden from the security forces. The last scene in the film is of a small boy filming the destruction of the nest of dishes by police. He is using a smart phone.

There is also a sequence that starts with Bahari in the depths of despair. He is convinced that the world has forgotten entirely about him. He has been told that his girlfriend has not contacted the Iranian authorities. But then a security guard mentions that Hillary Clinton has been talking about him. In that instant, he realises he absolutely has not been forgotten, and is in fact famous and the subject of much outside debate. That awakening is captured in an animation sequence that shows information and keywords spreading around the world. It is out of kilter with the naturalistic feel of the rest of the film, and reminded me of the use of maps to show travel and the passing of time in films from the ’40s and ’50s. But it is effective, and a compact means of making the point.

Conclusion

Technology, and a sense of technical change and opportunity is everywhere in society, and everywhere in the world. That is influencing the mirror of film – only one of the films above was directly about the use of technology, but technology pervades all of them. This presents filmmakers with a challenge – especially when the technology is digital. How we represent the drama and rhythm of lives that are part virtual becomes an interesting and essential question. Soon I suspect someone will make a breakthrough film which tackles and answers that question head-on.

I look forward to it.

 

New Dangers, Opportunities: mobile, cloud and changing client expectations will deconstruct and reshape IT services.

As in the Chinese proverb, we inhabit interesting times. Disruptive changes in client expectations and the accelerating evolution of technology are remaking the IT services industry. It is a time of long change, bringing challenge, possibility and opportunity. Let’s see why.

Results of the Quarter: Mixed Performance in Outsourcing, New Growth in Consulting

Most of my career has been spent in providing tech services, so I watched the cycle of summer 2014 earnings announcements from the big IT services companies with much interest.

One stand-out set of results came from the giant India-based outsourcer TCS. It managed quarter-on-quarter revenue growth that almost matched the annual growth of some of its major competitors. A headline in India’s Business Standard read “TCS Q1 results prove elephants can dance”.

But overall the mood across the sector was muted as the multi-national and major Indian players reported. Other providers did not do so well. There was greater variability in results than in previous quarters. Total cost of ownership and pricing remained major factors for clients of big IT and BPO services. It is a tough market.

Conversely, the feedback I get from speaking to leaders of classic consultancy firms is straightforwardly positive. Many of the traditional players have seen annual consulting growth around 10%, and some upstart new entrants are doing much better than that. There is demand for classic, high-end systems integration skills coupled with new digital capability. There is new energy in onshore recruiting markets and raw competition for the most modern skills.

Given that consulting and system integration have often been seen as traditional and declining business areas, what’s happening? Let’s start by looking at outsourcing and Application Development and Management (ADM) services.

Outsourcing Under Pressure

Classic outsourcing – primarily a global delivery/offshore business these days – remains a huge market. It is also one under considerable pressure, and long-term pressure at that. This pressure partly originates from clients: in recent conversations with board members of client organisations, I have often been told of dissatisfaction with the true value of much modern outsourcing.

As a result, the market is deconstructing and transforming the offerings it wants from suppliers. Many clients want more control and more value, so many contracts continue to become smaller and shorter. Other clients seek the ultimate cost solution: there are now a small number of very large, broad and long-term engagements – covering infrastructure, applications, BPO and consultancy – where suppliers offer intensely competitive rates while simultaneously buying the client’s assets, or paying a price for the existing IT department.

A Cycle of Renewals and a Battle for Market Share

Importantly, the outsourcing sales cycle is now one largely based on renewals, where clients put out existing contracts for rebid. The result is a ruthless battle for market share – red in tooth and claw. It is a classic commodity market. There will be winners, but the likely long-term outcome is a smaller number of larger players.

I’ve led teams that built market-leading cost structures and introduced global delivery and productivity innovation at large scale. But any company with strong interests in this market will need to continuously and radically hone its onshore and offshore cost base, and to seek new innovation to drive productivity. There will be times when capital will need to be used boldly to win deals.

Acceleration in Technology

The new activity in consulting, on the other hand, is partly fuelled by shifts and disruption in technology, and by the creation of new business-model possibilities.

The code word for this is “Digital”, of course. It works well as shorthand, and all the major global players have a digital strategy and vision, looking for new growth in what is a constrained total services market.

But any supply-side player or CIO also needs to make sure they aren’t simply painting speed stripes on the side of their 10-year-old SUV and stenciling a large ‘D’ on the hood. Digital shouldn’t just be a re-branding of old e-commerce models. We need to be much more specific about the disruptions, opportunities and challenges.

Mobile-first, Cloud-first

One of the simplest and best visions of the new world comes from Microsoft, and was summarised in CEO Satya Nadella’s recent email to all his employees. He talks of a mobile-first and cloud-first world, made up of billions of PCs, tablets, mobile devices and sensors that run “cloud service-based apps spanning work and life”. The implication is that we should see this world, and its opportunities, as based on the integration of mobile, cloud and applications. The recent tie-up between Apple and IBM also underlines this pattern. Other digital definitions include data, analytics and social tech – vital disciplines – but for me “mobile-first/cloud-first” is the essence of the current tech wave.

Mobile usage already dominates internet access in some parts of the world; it soon will everywhere. Cloud moves increasingly into mainstream use. One simple example: there are still teams that take three months to provision development and production environments, sometimes because of market regulation. One UK-based start-up team I know automatically creates its dev environments on Amazon Web Services every morning and shuts them down every evening to avoid paying overnight costs. That is a vast difference in productivity.
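As a purely illustrative sketch (not that start-up’s actual tooling), the following Python script shows how such a morning/evening rhythm can be automated against AWS with the boto3 library. It assumes AWS credentials are already configured and that the development instances carry an env=dev tag; both assumptions are mine, not details from the team in question.

import boto3

# Talk to EC2 in one region; the region here is an assumption for the example.
ec2 = boto3.client("ec2", region_name="eu-west-1")

def dev_instance_ids():
    """Return the IDs of all instances tagged as development environments."""
    response = ec2.describe_instances(
        Filters=[{"Name": "tag:env", "Values": ["dev"]}]
    )
    return [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]

def morning():
    # Bring the development estate up at the start of the working day.
    ids = dev_instance_ids()
    if ids:
        ec2.start_instances(InstanceIds=ids)

def evening():
    # Shut everything down overnight so no one pays for idle capacity.
    ids = dev_instance_ids()
    if ids:
        ec2.stop_instances(InstanceIds=ids)

Run morning() and evening() from a scheduler such as cron or a small Lambda function, and the cost discipline becomes automatic rather than a matter of willpower.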

Indeed, one of the reasons there is so much enthusiastic start-up activity is the ease of creating the environments needed to build and run apps. Young entrepreneurs assume the cloud; in fact they live and breathe the cloud. It gives them instant potential reach, and instant visibility.

Software as a Service and the Changing World of Applications

As a concept, cloud starts with reasonably cost-competitive and elastic access to infrastructure and platforms. It is also increasingly about access to a rich and developing market of apps and services, under the banner of SaaS, or Software as a Service. It is this that will make cloud of fundamental importance. Indeed, the fastest-growing skill needs I’ve seen over the last two years are precisely around the configuration of SaaS apps such as Salesforce.

And the use of SaaS gets bolder, larger and more complex. High-end system integration skills are increasingly needed for cloud integration.

Early in the Life-Cycle and the Growth Curve

Another key insight is that we are early in the life cycle of this mobile-first and cloud-first world. Given the histories of Nokia and Blackberry, mobile is a market subject to fast learning and fast change. We should not assume a world dominated by Samsung and Apple devices: high-spec, lower-cost devices from China are making rapid progress in domestic and international markets. Other examples of evolution in progress include the current vast human experiment with form factors, and the large number of emerging technologies for handling mobile payments on the hoof.

Many Platforms, Many Choices

Here’s another important symptom of an immature market: the CTO of a significant, world-class B2C company has complained to me of the increasing differences between mobile universes – iOS, Android, Windows – and the effort required to deploy consistent, high-quality apps across them. It eats too much of his dev budget.

We have simultaneously made it easier to run software, and harder to write it.

We all have folk memories of the simpler world of the 1990s and early 21st century. There was a roughly standard market architecture based around Windows PCs, the web, a limited number of server types and a small number of significant dev and database choices.

Now is a time much more reminiscent of the 1970s and 80s. There are major choices to be made: iOS, Android, various incarnations of Windows, Google, Amazon Web Services, Tizen, many choices of language and database, and decisions to be made between classic enterprise software and cloud-served enterprise upstarts. Public cloud services can be relatively expensive for some domains – which means careful thinking and prototyping is important – and billing of cloud services can be complex.

A New Dawn for Architects

People are looking for help. One small start-up I like has created tools for enterprises to build very simple cross-platform mobile apps. They get extraordinary senior access to corporates as enterprises grapple with the new choices, and the resulting complexity.

So one great need, and for IT services companies one of the opportunities, is for informed architects – people who can shape integrated solutions across these platforms, across mobile and cloud, and then across business function, data and social tools. Such thought leaders are needed more than ever. And there is also a market premium for developers who are fluent with the new tech.

Faster and Better and Cheaper?

Businesses have also long lost patience with the cult of the large program – a long-term trend, of course, but the new technology seems to offer an additional promise of greater agility and responsiveness.

The software development model is shifting from something akin to building cathedrals to something more like town planning, where a good architecture connects a network of small apps teams delivering in Agile sprints or in smaller, more traditional releases.

Many companies are creating digital development hubs that are often onshore. The result is new demand for coding skills. The art of programming is fashionable again, and with web development, individual developers can make a huge business difference. It is likely that key, future IT services will be less based on process. They will be more human.

Given the integration of mobile, apps and cloud, teams are being structured around a DevOps model in which infrastructure and application are treated as a connected whole. A new science of project-as-a-service is being created.
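To make that connectedness concrete, here is a minimal sketch in Python, again using boto3, of application and infrastructure being released as one unit via AWS CloudFormation. The stack name, template file and AppVersion parameter are invented for the example; the point is the pattern, not the specific tooling.

import boto3

cloudformation = boto3.client("cloudformation", region_name="eu-west-1")

def deploy(stack_name, template_path, app_version):
    """Create or update the application's infrastructure stack, passing the
    application version in as a parameter so that app and infrastructure
    move through the pipeline together."""
    with open(template_path) as f:
        template_body = f.read()

    params = [{"ParameterKey": "AppVersion", "ParameterValue": app_version}]

    existing = {s["StackName"] for s in cloudformation.describe_stacks()["Stacks"]}
    if stack_name in existing:
        cloudformation.update_stack(
            StackName=stack_name, TemplateBody=template_body, Parameters=params
        )
    else:
        cloudformation.create_stack(
            StackName=stack_name, TemplateBody=template_body, Parameters=params
        )

if __name__ == "__main__":
    # One call deploys both the infrastructure definition and the new app version.
    deploy("web-app-dev", "app-stack.yaml", "1.4.2")

The design point is that the template lives in the same repository as the application code, so a single pipeline step moves both forward together.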

Masters of Delivery

This will be important to get right as ambition around cloud-served systems grows. There are already a number of large-project failures that have at their core a naïve approach to Agile. So we will need a new generation of what I call Masters of Delivery: people with leadership and project-management skills, able to bring and adapt their insights around scale and managing complexity to the new tech.

These new development approaches may increase speed, but at the cost of some complexity. Systems become networks of cloud services. Projects become networks of apps teams.

There is more opportunity here. Another bright start-up team I know is developing new ops tools for instrumenting and managing applications in the cloud. They gained customers almost from their first day of business, so large is the need. More challenging will be the creation of better, re-usable architectures that are interoperable across mobile, cloud (private and public) and enterprise/legacy platforms, but both the need and an enormous opportunity are there.
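As a simple illustration of what instrumenting an application in the cloud can involve (this is not that start-up’s product; the namespace, metric and dimension names are invented), an application can publish its own operational metrics to a managed monitoring service such as Amazon CloudWatch, where operations teams can graph and alert on them.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

def record_orders_processed(count, environment="dev"):
    """Push a single data point for a custom OrdersProcessed metric."""
    cloudwatch.put_metric_data(
        Namespace="ExampleShop/Backend",
        MetricData=[
            {
                "MetricName": "OrdersProcessed",
                "Dimensions": [{"Name": "Environment", "Value": environment}],
                "Value": float(count),
                "Unit": "Count",
            }
        ],
    )

if __name__ == "__main__":
    record_orders_processed(42)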

Putting It Together

To summarize and conclude:

Firstly, traditional big IT services – based on outsourcing and ADM models – remain a large market, but one that is highly commoditized and competitive. The focus on cost will remain fundamental, driven by competition for market share.

Opportunity – Transforming Outsourcing

But there is also a gigantic opportunity for new types of service, where human effort is replaced and augmented by automation. In fact, as clients switch to cloud-served apps, the outsourcing model as a whole will need radical overhaul. This will likely be a long journey, given the early and evolving nature of the relevant technology, and the fact that building complex software to support multiple client organizations requires real investment. The big IT players have many resources. They will need to use them.

Opportunity – New Integration Services

Secondly, we are all embarked on a ten-year transition to that mobile-first and cloud-first universe. This creates new opportunity, and open space for people and new start-ups.

We will need new tools and architectures to manage and integrate networks of teams, devices, infrastructure and apps. We will need world-class architects to make big choices and work across an integrated stack that links infrastructure, application and business. We will need re-engineered and re-vitalized project management and systems integration skills that can create the project-as-a-service and agile delivery models of the future. And the process of building systems will likely be less process-driven, and more based around human skills and good tools.

Opportunity – Reshaping the Service Model

IT services companies can themselves deliver these capabilities in reshaped ways and at reduced cost. New types of flexible relationships with employees are not only possible, but often desired. There is an opportunity for the brave to re-invent and upgrade global delivery culture around new aspirations. And of course, architectural frameworks, SaaS and automation can be used directly to automate, deliver and enable such services. IBM is already providing online “digital service offerings” across social analytics, inspection of SAP and Oracle systems, and more. The possibilities for creativity are immense.

It is a time of long change, and as always that brings challenge and possibility and opportunity – for individuals, established companies and new entrants.

Keith Haviland is a business and technology leader, with a special focus on how to combine big vision and practical execution at the largest scale. 

Former Partner and Global Senior Managing Director at Accenture, and founder of Accenture’s Global Delivery Network. 

Published author and active film producer, including Last Man on the Moon. Advisor/investor for web and cloud-based start-ups.
