
Thursday, September 03, 2009

iPhone and the future of personal devices

The future of personal devices is iPhone?

Apple's iPhone made it to the cover of Time magazine in 2007, the year it was released.  Now it has made the cover of New Scientist.  Why?

Because it's in the process of fulfilling a vision I - and many others - had long ago.  That vision was for a ubiquitous device that would meet all one's digital needs - anywhere.

That shouldn't have been too hard - in theory.  I once had a PDA (personal digital assistant, a pocket-sized computer) that ran on Windows CE.  You beaut, I thought at the time, it's Windows-based and actively supported by Microsoft, so it will be a significant platform into the future, there'll be plenty of software for it, it will do everything.

That PDA only partially realised the vision.  It held music, photos, videos, spreadsheets, documents... and could connect wirelessly to the internet.  But as a general device, PDAs never grabbed the mass market's imagination in the same way that PCs, mobile phones and the internet did.

What happened?


First, a market overview.

Worldwide, there are already more mobile phone services than landlines (2005 figures, for example, were 2 billion vs 1.2 billion).

Further, there are already more mobile phones being sold than any other device (according to the above reference, 2005 sales were 830m mobiles, 210m desktop/laptop computers, 100m game consoles, 90m digital cameras).

Even earlier - 2004 - sales of smartphones (integrated phone/PDA functionality) had overtaken standalone PDAs.  As of 2009, they constitute one in every seven mobile phones sold.

Back in 2004, as far as operating systems went, Windows CE was market leader at 48%, well ahead of Palm's 30% and RIM's Blackberry at 20%.

The picture is rather different now.  The latest worldwide figures (Gartner, Q2 2009) are:
  • Symbian 51% (Nokia, Motorola, Sony Ericsson, and others)
  • RIM (Blackberry) 18.7%
  • Apple 13%
  • Microsoft 9%
  • Android (Google's new platform) 2%
  • Palm <1%
(The bulk of the rest - 5% or so - is Linux-based)


Ultimately, PDAs were superseded by connected devices, driven by that consumer product of choice, the mobile phone.  That's not the end of the story: ubiquity in market penetration is gradually leading to ubiquity in functionality, and Apple is leading the charge.  The iPhone may not be the market leader, but the breadth and volume of its applications are world-beating - a phenomenon illustrated impressively by the New Scientist article.

The other part of the equation is loading the device up with communication capabilities - especially location awareness, which has sparked a surprisingly large and imaginative range of applications.

Microsoft did have a vision for its operating system which encompassed both PDAs and smartphones, but execution failed.  Although they pushed it quite strongly through their developer community, their market share is inexorably declining because they simply never caught fire with the wider public.  That especially is where Apple shines, generating momentum that fosters further innovation.

On the downside, Apple is prone to imposing restrictions (on hardware, software, and connectivity enhancements) aimed at protecting its turf, including its brand image, in one way or another.  That has been its downfall in the past, and it's a caveat that may yet unseat the current ride to glory.

The original vision stands largely fulfilled, in actuality or near-term capability.  Yet for me there remains an annoying gap in data entry: how to transfer your digital world into the device, especially when mobile.  The fitted microphone and camera go only part of the way.  In my original PDA, the issue was partly addressed with a fold-out keyboard that the device could be plugged into.  Still, typing as a paradigm leaves a lot to be desired.  Waiting for the next leap forwards...

Thursday, August 27, 2009

SETI, Open source, and the socialisation of productivity

What does SETI have to do with Microsoft's furrowed brow?

We all know the Search for Extra-Terrestrial Intelligence, whereby the universe is scanned for signals throughout the electromagnetic spectrum which can be interpreted as originating with intelligent life. Some of us have run SETI@home: you download a screensaver, which runs in the background, borrowing your unused computer time to run a parcel of number crunching for SETI. Everybody wins: only your idle computer time is used, and it can have some wider community benefit - you may even be responsible for the first discovery of extraterrestrial life.

That was the first distributed grid computing project to gain widespread publicity. But the software is now available to turn any general project requiring major computer time into a socialised project. The Herald recently ran an article on Australian use of such software: specifically, BOINC, The Berkeley Open Infrastructure for Network Computing. The article said over 32,000 Australians were currently running BOINC projects, out of 1.7 million people worldwide.
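To make the mechanics concrete, here's a toy sketch of the pattern in Python - the prime-counting task and all the names are mine for illustration, not BOINC's actual API. A job is split into independent work units, and each "volunteer" crunches its parcel on otherwise-idle processor time:

```python
from concurrent.futures import ProcessPoolExecutor

def crunch(work_unit):
    """Stand-in for a science task: count primes in a sub-range."""
    lo, hi = work_unit
    return sum(
        all(n % d for d in range(2, int(n ** 0.5) + 1))
        for n in range(max(lo, 2), hi)
    )

if __name__ == "__main__":
    # The "server" splits the job into independent parcels...
    work_units = [(i, i + 100_000) for i in range(0, 1_000_000, 100_000)]
    # ...and each "volunteer" crunches parcels on spare processor time.
    with ProcessPoolExecutor() as pool:
        results = pool.map(crunch, work_units)
    print("primes below 1,000,000:", sum(results))
```

The essential property is that the parcels don't depend on each other, so stragglers and dropouts (volunteers switching their machines off) cost nothing but a re-send.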

The scope is tremendous, not just for general scientific research, but also for any community-sector project that may not otherwise have the resources to get off the ground.

For the moment, here's a list of projects you may wish to take part in. They are mostly scientific research - mainly in biology, physics and maths - but there's also a World Community Grid, which is specifically aimed at humanitarian projects.

As for Microsoft, the other side of community computing is software - open source, to be specific. An open source project is contributed to by many, carries no profit-oriented copyright, and is generally available for free. OpenOffice may be the most famous - a direct competitor to Microsoft's Office suite. And as a method of developing software that is freely available to all, open source has gained acceptance in most areas of my professional focus, business intelligence. Apart from the well-known MySQL database, there are open source tools available for most related areas: as well as database and BI software, there's also ETL, data profiling, and so on.

Over time, you should expect prices to tumble in all types of software directly affected by open source initiatives. Yes, the likes of Microsoft can expect some buffering from these forces due to brand-name strength. But Microsoft is also worried enough that it is already working on alternative revenue streams, including jumping into the cloud. Those alternatives shouldn't see a collapse of capitalism any time soon, but the long-term trend can only benefit the public, particularly those who might not otherwise be able to afford such computer resources - especially in the developing world.

In a wider sense, distributed computing and open source are simply harbingers of a globalisation and socialisation of productivity, for the benefit of all.

Friday, July 31, 2009

Technology enables rare books

Fed up with not being able to get hold of that out-of-print book?

A new partnership shows a model for future access to rare (or small-run) publications - using three different aspects of technology.

The University of Michigan is running a project to digitise its collection. It has partnered with Amazon to offer access to its collection of 400,000 rare, out-of-print books. Amazon had already bought a print-on-demand company called BookSurge; Amazon itself acts as an aggregation point for millions of titles, and its high profile will best enable sharing of the University's collection.

The way of the future: an aggregation point, plus a print-on-demand service, plus the archival content. Cool.

Monday, July 20, 2009

Insights into teenage tech trends

A teenager on work experience for a bank has written a 'research note' on media technology that has reverberated around the world.

This despite the fact that the research was not quantitative: he consulted a few friends, then wrote up in one day a paper that has been praised for its [anecdotal] insights.

The bottom line was that in their formative years of consumerism, teenagers are great adopters of digital media, but take issue with both cost and advertising (don't we all? - but their sensitivity is much greater). They adopt most media because it costs nothing, and pay for only a few by necessity. Texting [and phone calls] and cinema are the main media that are paid for. Music is either pirated, or consumed from internet stations, which offer more choice of content and no ads.

in: good mid-range mobile phones (can be obtained at birthday/xmas), Facebook, the internet as a search/reference resource, viral marketing (word of mouth, so to speak), cinema (especially while at kids' prices), game consoles
out: Twitter (costs), radio (ads), regular tv watching, iTunes or similar, newspapers

Teenagers will carry many habits into adulthood; the fact that this is practically the first generation to have access to a wide range of digital media technologies suggests some significant adjustments in markets and advertising are afoot.

Morgan Stanley's comments are largely limited to: "[the] influence on TMT [technology, media, telecoms] stocks cannot be underestimated." But I can't resist making a few observations of my own.

Since all this suggests it will be harder to wean people off a non-payment habit, the implications are good for innovation in areas that benefit from large-scale commodification (commercial markets such as devices and mass-market pop music), but bad in areas that commodification harms (art).

Implications are particularly bad for most music artists, as they will find it harder to earn money from selling content. Particularly at the medium to low volume end, this can profoundly affect the market, as far fewer people will be able to make a living from their music. At the top end, there will be greater pressure to tailor music for the paying market, which may drive the quality even further down to find a lowest common denominator that sells in sufficient bulk. Yet ultimately the ramifications are not all bad, if you consider the New Zealand model. Being such a small market, the expectations of most New Zealand musicians in the 1980s were not high in an income-earning sense, so in a way they made music with less regard to commercialisation. As a result, New Zealand was a hot-bed of innovation*.

Implications are also bad for advertisers, who will not easily reach people as they pass from a consumer stage of life to an earning, choice-laden lifestyle. Most traditional venues for advertising are ignored, particularly print (however, since the writer was male, he may have understated the appeal of girls'/women's magazines).

Consumer technology in the last couple of decades has been driven by mass commodification, which led to great innovation in areas of affordability - for example, home computers and mobile phones. The suggestion is that hardware innovation at the consumer end will continue to be focused on mobiles, computers and gaming.

The report was written in London by Matthew Robson for Morgan Stanley, and can be read here.


*spoken in a past tense because I have no recent knowledge of that market.

Thursday, July 16, 2009

Web: easy as 1.0, 2.0, 3.0

I have a memory of a W3 Consortium seminar in Sydney several years back. It discussed their efforts to put meaning into web content via The Semantic Web, using a concept/relationship mapping language called OWL, which uses RDF (a metadata descriptor) and XML syntax.

They intended to arrive at a structure that was universally navigable mechanistically (by computer), yet retain for each specialty area its own language/concepts. Yes, it was developed by academics, for academic applications.
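For a taste of what such machine-readable statements look like, here's a minimal sketch using Python's rdflib library. The vocabulary here is made up for illustration - real applications would reuse published OWL ontologies:

```python
from rdflib import Graph, Literal, Namespace, RDF

# A hypothetical vocabulary; real applications reuse published ontologies.
EX = Namespace("http://example.org/vocab#")

g = Graph()
g.add((EX.Sydney, RDF.type, EX.City))
g.add((EX.Sydney, EX.locatedIn, EX.Australia))
g.add((EX.Sydney, EX.population, Literal(4_500_000)))

# Because the structure is formal, a machine can navigate it without
# understanding English - for example, find everything located in Australia:
for subject in g.subjects(EX.locatedIn, EX.Australia):
    print(subject)

print(g.serialize(format="turtle"))
```

Each statement is a subject-predicate-object triple; stringing triples together is what lets each specialty keep its own concepts while remaining mechanistically navigable.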

This was before the concept of Web 2.0 was sufficiently popularised to gain a solid meaning. At the time, I believe they used the term Web 2.0 to describe their endeavour.

Times change, meanings change. The term Web 2.0 has been usurped for another purpose, and it looks like the W3 Consortium is now using Web 3.0 instead. At the current state of play, the simplest description I have seen of the evolution of the web (from Jean-Michel Texier via Peter Thomas) is as follows:

* Web 1.0 was for authors [ - to be read]
* Web 2.0 is for users [ - fosters interaction]
* Web 3.0 is also for machines [ - fosters automation]

In effect, Web 3.0 should enable more rigorous discovery and collation of information from the far corners of the web. Something like what Google should be, if it had the full smarts. However, it would only work where web content authors added the background tags and capabilities - so it's more likely to be taken up for knowledge/information-building purposes, such as research, reference materials and databases. But this is a deceptively powerful paradigm, and the sky's the limit for assembling useful meaning. The current Google would look like a paper telephone directory... but by then, Google would have evolved to make full use of it. A fully referenced assembler of knowledge, rather than isolated lumps of unverified information.



PS If interested in Data Quality in a technical, database sense, see my latest tech post. (This one was intended for a generalised audience!)

Thursday, July 02, 2009

Facebook as a tool or timewaster

The internet is a great time-waster. Facebook even more so.

Ostensibly a networking site, its best value is as another tool to allow people to keep in touch with each other despite the inevitable tyrannies of time and distance. While snail mail can be personal, mobile phone direct and email easy, Facebook is a permanent placeholder that can be as low maintenance as desired, or generate a constant stream of traffic between people or groups. Yet the bottom line is that one can easily check up on what a friend is up to - without necessarily having to engage. And you never have to lose contact. In fact, solely through Facebook, I've restored a number of contacts I thought were lost for good.

But you get all types. Facebook is also good for communicating interests. But some people almost live on that site, sending/receiving a steady stream of actionable items - questionnaires, games to play, and groups to join (thanks, Andrew, for all the worthy causes - and they are all good, especially the group declaring Steve Fielding to be not real - but there's only so much time in the day...)

I also get a number of friend-linking requests from people I don't know - largely friends of friends, I guess, with a smattering of mistaken identities. I ignore anyone I don't know well. It keeps the names down to a manageable number. Not like some who collect names with little discrimination, and get lost in the morass.

Recently I saw a (Wiley) cartoon - quite funny, but sobering - that typifies for me that rush of blood to the head that accompanies Facebook, and the lack of discrimination that dilutes its use as a tool.


LinkedIn is the equivalent for professional communication. The equivalent collector there is the professional recruiter - many are casting wide their search for names with apparently little planning on how to use LinkedIn as an effective business tool. I accepted a few invitations to link before I realised I was going to get swamped. So with due respect to recruiters (as a contractor, I'm constantly looking for work), I no longer accept invitations from them.

Thursday, May 14, 2009

Windows: pirated, owned, bugged

Interesting to hear news that botnet malware is being found in pirated copies of the new Windows 7.

Here's the translation. Windows 7, not released yet, has been getting better press than the previous Windows Vista. Beta copies have been released by Microsoft, including the latest one available free from their website for the past few days. It's called "Windows 7 release candidate", and it's probably going to change very little for the proper release later this year.

But beta copies are pre-release, given to people for testing purposes: there could still be a few bugs in it. And W7rc is available from Microsoft's website, free. So why would anyone go for a pirate copy? In this case, they're being downloaded from peer-to-peer BitTorrent sites. That means a much faster download: the bitstreams come from a number of sources, so it isn't dependent on a single server - or on Microsoft's website, for that matter.
Moreover, there'd be a number of people who are so used to downloading from such fast sources that they'd source their needs - licit or illicit - from there.

But some enterprising soul has hacked the W7rc code, just days after it was released. And they inserted into it code that compromises the computer it's loaded on, rendering it part of a botnet - a network of compromised computers that could be hijacked at will for any number of nefarious purposes, such as emailing spam or partaking in attacks on other computers (eg DDOS, distributed denial of service). And that hacker figured others in the shadow internet world would be sufficiently tempted.

And they were. Not only downloaded, but loaded, operational, and calling home to the specified target for orders. Damballa, an anti-botnet organisation, "managed to grab control over" the server the hack was directed to, and noted that at the peak, 550 infected computers per hour were calling in.

Lessons?

It should be one big bounty for Microsoft, in its quest to get everyone to pay them money: "Buy a genuine copy, or you'll get infected/compromised". The concept should provide an even better bounty for evil hackers. Why stop at botnets? Why not a hack that allows for eavesdropping, so you can grab a user's personal information - ideally, for them, bank account details, etc.
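For the cautious, the standard defence is to check a download's cryptographic hash against the one the publisher lists. A minimal sketch - the filename and hash here are placeholders:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a large file without reading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Compare against the hash on the official download page; any tampering
# en route (or by a torrent re-packer) changes the digest completely.
published = "0000...placeholder..."  # copy from the publisher's site
if sha256_of("windows7rc.iso") != published:
    raise SystemExit("Download does not match the published hash - do not install.")
```

Of course, that only helps if people bother - and the whole point of this episode is that convenience won.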

Piracy will never be the same again.

Tuesday, May 12, 2009

3D TV now - at CeBIT

CeBIT Australia is a business technology exhibition event, an offshoot of the one in Germany, which is the largest in the world.

It can be somewhat dry however, despite attempts to jazz it up with bright colours, baubles, takeaways, giveaways (especially iPods, about a year or so back), lollies, and - this year - the surprising appearance of a few women with obviously more experience with - well, appearance, than say software skills or hardware marketing.

Yet one of the most arresting sights this year was hardware. The company was a Korean one called Pavonine, and it was marketing a range of 3D screens called Miracube. The first screen I looked at required special glasses, and the film loop it showed was quite impressive - although it reminded me of a recent 3D cinema release I took the kids to, Monsters vs Aliens: the usual tricks of objects projecting out of the screen, and the occasional item flung out at the viewer. The cinema release was more effective, but it was still quite impressive to see the effect on the small screen - 3D TV at home, potentially.

But the second screen was even more impressive - it was displaying 3D with no need for special glasses - naked, as it were. A real wow effect.

There were some caveats, however. The loop displayed on the screen was a selection of static images - yet still in 3D, with some features projecting out of the screen. However, to maintain the effect it was necessary to focus on the screen in a certain way. It was rather like that fad about ten years ago for printed 3D pictures. The trick there was that the backgrounds were always patterned; the final image was actually generated by computer to achieve the effect; and you had to focus on the page in a certain way. Many people found it difficult to see the hidden 3D image, but I usually had no problem - it was a matter of relaxing the gaze: in fact, moving the focal point of one's eyes away from the surface of the page.

I'm not sure whether this screen worked in the same way. I could tell however that the effect was achieved by providing different images - based on vertical lines - to each eye, exploiting the (relatively small) distance between the eyes. Yet if I moved slightly, the effect was lost and the image jarred; I also had to maintain a particular focus to achieve the effect.
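If it is column-based, the preparation step is simple enough to sketch. A toy version, assuming a basic two-view parallax barrier - real products presumably use calibrated sub-pixel layouts:

```python
import numpy as np

def interleave_stereo(left, right):
    """Build a parallax-barrier frame: even pixel columns carry the left-eye
    image, odd columns the right-eye image (H x W x 3 arrays, same shape)."""
    assert left.shape == right.shape
    frame = left.copy()
    frame[:, 1::2] = right[:, 1::2]  # odd columns from the right-eye view
    return frame

# The barrier in front of the panel hides the odd columns from the left eye
# and the even columns from the right eye, so each eye sees a half-resolution
# view from a slightly different angle - hence the sensitivity to head position.
```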

It would be great if the effect could be achieved without the riders. Although it was a bit of a strain in the end, the effect was really awesome. However, if I'm right and it is based on vertical lines, it may prove a problem translating the technology to a broadcast situation - which is, I believe, based on scanned horizontal lines.


Later, as luck would have it, the first person I started describing this technology to was Adam, whose stereoscopic capabilities are negligible, since it looks like his eyes focus differentially - that is, they don't work together. He said the best way for him to look at 3D was to close one eye - which rather defeated the effect.

Wednesday, December 10, 2008

Future tech: the possibilities in mapping

In the course of a presentation on mapping technologies yesterday, quite a few interesting applications came up.

Mapping technologies freely available today include Google Earth, Google Maps, and Microsoft's Virtual Earth (available to the consumer as Live Search Maps service).

Live Search Maps is a cut-down equivalent of Google Maps - and of less value in Australia thus far. But beyond a simple map service, these technologies have more meaning behind the scenes - in what can be done with the underlying technologies. The mapping engines of Google Maps and Virtual Earth can be used in a variety of contexts, some rather distant from the core consumer services provided. For example, I was told of Virtual Earth being used to navigate ultra-high resolution images of human eyes. In effect, the technology has been transferred to a very useful medical application. By extension, the possibilities are endless.

Under the hood, the technologies simply constitute mechanisms to navigate through a physical landscape of any dimensions or locations. No reason this can't include (with the appropriate data sets) maps of the moon, Mars, the known universe, right on down to any physical form that has been represented in sufficient detail. To this can be added third-party data for a variety of purposes. This is already being done to plot specific sets of geographical points, but it can also include representations of weather information, 3D rendered objects, older photos or created/imagined photos. You could thus superimpose on the present a planned future (and so see a full context for this new wing for Sydney's Museum of Contemporary Art), or even an imagined future. You could superimpose the past. It could be quite valuable for analysing history or archaeology. You could also look at a putative past, such as a different plan for the Sydney Opera House.
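Under that hood sits surprisingly little magic for the navigation itself: the core is a tiling scheme that can address any imagery at any zoom level. Here's a sketch of the standard Web Mercator tile calculation used by slippy-map services:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Map a WGS84 coordinate to the x/y index of the map tile containing it."""
    n = 2 ** zoom  # the world is an n-by-n grid of tiles at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x, y

# Sydney at zoom level 10. Any data set indexed the same way - weather,
# historical photos, even eye scans - can be overlaid tile for tile.
print(latlon_to_tile(-33.87, 151.21, 10))  # -> (942, 614)
```

Swap the globe's imagery for the moon, Mars, or a retina, and the same arithmetic still navigates it - which is exactly why the engines transfer so readily.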



In a broader sense, this is a demonstration that technologies that have emerged over the past five to ten years are likely to have a much more profound impact on us than some of the comparatively trivial applications available today suggest. If it is surprising that free distribution of much of this technology is viable in a business sense - and much of it has proven so - then what we will be able to do with little effort and no cost in the future may surprise us too.

Tuesday, September 23, 2008

Future cloud computing Googlified

Google's official blog discussed cloud computing 10 years hence - far more eloquently than I did recently.

In a nutshell, most computing power will come from web-based services (effectively, Everything-as-a-Service), and our own computing resources will be mere devices that hang off the cloud. Not quite like the dumb terminals on mainframes of yore, though. They rightly see continued exponential growth in the three mainstays of power: processors, storage, and networking (the essential plumbing). Our devices will certainly be powerful - but not a shade on cloud resources. (I see the power in local devices being chiefly used to drive our interaction with the cloud, in the long run.)

They see a great plethora of devices hooked up, many of them far smaller and more specialised in application than our typical laptops/desktops.

They also dare to speculate on the smarts - intelligence - built into "the cloud". Read it all here.

Thursday, June 12, 2008

Climate change and oil: an ironic confluence

It could be speculated that if the current oil shock had come about fifteen years earlier - say when Bush senior was waging his war - our global environment would not have such a drastically unhealthy prognosis.

Aside from the effects of maintaining war in Iraq the American way, this incredible rise in oil prices has other roots, particularly in the rapid rate of industrialisation in China (and to a lesser extent India). The irony of timing remains: if a few factors in the course of human history - or planetary composition - were tweaked, we might be switching away from fossil fuels before doing the irrevocable global damage that we are now busy causing.*


But we're caught unawares. Due to soaring petrol prices, Sydney's seeing a sudden strain on public transport infrastructure, after decades of favouring cars at the expense of rail. In Spain, the government shows signs of bowing to pressure from truck drivers. And despite governments across the world turning around on the issue, the pace of policy change is far too slow to match the urgency of the problem.

Belatedly, this shock has slightly increased the rate at which we are moving away from fossil fuels. But it's not enough, and precious lead time has been lost. Further, despite some arbitrary claims that we have reached a time of "peak oil" (which would deliver its own shocks), in reality the science and the economics are not incontrovertibly there, as they are with climate change. The surge in prices only makes it more lucrative to explore for and extract fossil fuels.

If the stars were in fully fortuitous alignment, we'd experience our current rapid technological spurt first, followed by an oil shock, followed by global warming danger. These factors are strongly intertwined, but the timing is off, and our mettle is being tested so harshly that one might stop to think there were no heavens to guide us. Our leaders are tested, but so, just as culpably, are we ourselves, who vote in those leaders and who wait around for others to take action or for governments to legislate to force our hand.

The Iraq-specific factor in the oil shock is temporary. But the galloping industrialisation of China is not.


*When I say our damage to the planet is irrevocable, I mean that the Earth has recovered several times in the past, but recovery doesn't happen on the scale of human history - it's in the millions of years. So this human-caused event is an "in our lifetimes" type of situation.

Thursday, May 22, 2008

Tech: Telecomms present and future, from Paul Budde

Paul Budde's BuddeCom is an Australian organisation that analyses telecommunications markets around the world. He was presenting again at CeBIT this year, and was very worthwhile to listen to, as always. His themes are usually around competition and the legislative environment, and so inevitably he rails against the current Australian regulatory regime, as well as Telstra's special position in dominating the market.

The following are some notes I took from one of his presentations. Many of the figures quoted derive from his organisation's own research, which carries sufficient gravitas for the US government to renew its contract to buy all of BuddeCom's annual cycle of reports - and those reports are many and come with a hefty price tag.



The Australian telecommunications market is still growing, however the rate of growth is dropping (in dollar terms) as it moves from a voice- to a data-based paradigm.

Telstra still has about 65% of the market (70% of the wholesale market), but that share will continue to decline. (Budde noted that growth in the wholesale market was rather stagnant in Australia compared to more mature markets such as the European Union.) The total market grew by 5.2% to $AU36.6b over the last year.

Into the future, he sees an additional channel for telecommunications, including broadband, being via the electricity distribution network. (This is not out of keeping with Optus' prognostications - a year ago, I heard their chief technologist saying something similar.)
He also sees health-related applications (eg diagnosis) becoming a significant part of internet-based traffic - as much as 25%.

Mobile phone penetration has pretty much reached saturation: 110% of the Australian population, indicating that a noticeable number of people have more than one active phone. The Average Revenue Per Unit (ARPU) has stabilised at $46.70 (presumably per month). This will not change significantly due to 3G services - which will reach 8m (45% of the population) by the end of the year - because further market penetration of 3G is hampered by price. And so price drops will have to accompany increased market takeup. Budde commented that the growth in 3G content was handicapped by the great reluctance of existing players to open up their networks to third-party content providers.

Budde noted that one of the most profitable mobile operators in the world - India's Bharti - has very low charges: about $5 per year plus 1c/minute call charges.

He also stated mobile market penetration in Africa - the poorest continent in the world - was at 60%! In fact, he told me later that the actual figure arrived at by his analyst was 80%, but this was based on government figures which he didn't trust, so he discounted it to 60%. Three comments he made to me help explain this: a) takeup was not uniform, with quite dramatic differences between countries; b) the figure was based on mobile services - which often amounted to simply a SIM card, which owners might take to someone local to rent a phone to make the actual call; c) telecommunications in Africa is a significant economic tool, and mobile infrastructure may be present even where more fundamental infrastructure such as roads is poor. I suggested it also reflected the fact that mobile infrastructure is often quicker and easier to put in place than fixed lines; Budde didn't disagree with this, of course.
Budde expects the number of broadband services in Australia to go from 4.5m to 5.5m by 2009. An ongoing message of his has been that much of what we call broadband in Australia is actually internet connection at relatively low speeds of 1 Mbps or less. However, increased takeup again hits affordability issues, and he felt there was a fair way to go, by international standards. He reckoned it would take a price drop to $39 per month to get takeup up to 80%. Broadband costs _are_ falling, but they're still quite high by international standards.

He expected the telecomms boom to continue to at least 2015, with data overtaking mobile services as the key driver of growth.



Budde also made the point that a gradual drift away from tv-watching is leading to an 'unleashing of minds'. That is, passive entertainment is giving way to more self-governing, engaging ways of passing time. He mentioned, as an example of this, Wikipedia, which he said represented 100,000 hours of human labour.


I'm not sure about that last figure (I would have thought the number was much higher), but I'm inclined to take issue with Budde's more general point.


It is true, there has been a marked drift away from television, which has the networks worried. And in parallel there has been an increase in the amount of 'viewer' time that has been absorbed by the internet. However, television as a passive medium is likely to be just the ticket for a large number of people, at the end of a day. The drift to net-based entertainment may currently be taking place amongst those people who are less inclined to settle for passive entertainment, and there may be a limiting boundary to that drift of audience. Moreover, it's possible that an admixture of the two may come to dominate (albeit not exclusively) our collective attention. That is, internet-based delivery of passive channels, such as we see with broadcast tv, but with far greater choice of viewing material. And there will always be those who will be happy to do what is easiest of all: turn on an appliance (whatever it becomes), sit back, and watch.

Thursday, May 15, 2008

Bruce McCabe on Future Technology


Every couple of years I have the pleasure to hear a talk by Bruce McCabe, and each time he presents a riveting vision of our technological future.

McCabe is a Sydney-based industry analyst who, through his company S2 Intelligence, gives vision to a number of large organisations, including all Australian governments at State and Federal level.

The following increasingly terse narrative is drawn from my notes of his speech, in which he canvassed a number of technological developments that are already present, and will be 'disruptive' [to businesses] in the near to medium term. My comments (or elaborations) are in square brackets. Throughout, I noted that most of these developments have very significant privacy implications. Any errors or omissions, blame me.

The sections below are Video, Storage, Voice, Human Networking, Image Processing, Spatial Media, and Sustainability Monitoring. I leave it to you to surf the sites mentioned; I have not yet had time to go through them all.

Video
Although currently mainly consumer-driven, video over the internet will be increasingly geared to business needs; it will account for 98% of internet traffic in a year or two. Technology is already available that can index the content of video clips, so the content of videos will become searchable. Use of video will become more structured [and commodified] to the point where clips can be treated similarly to text-based objects, including cutting, pasting, and hyperlinking to the middle of a clip. Moreover, there will be automated [computer-, not human-directed] analysis of clips - news in particular. Reuters is starting video news feeds that are directed specifically to machine analysis. Bulletins will be mined for meaning; for example, a report on a given company could be analysed for sentiment, which could feed into automated share traders. (I find this concept particularly insidious, as any automated trading can exaggerate the volatility of share markets, and this mechanism has potential to disrupt markets on flimsy bases.)
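[To see how flimsy a basis it could be, consider how little it takes to build a naive version - a toy sketch of my own, not anything Reuters or the traders would actually run:

```python
import re

POSITIVE = {"profit", "growth", "beat", "upgrade", "record"}
NEGATIVE = {"loss", "lawsuit", "downgrade", "recall", "fraud"}

def sentiment(transcript):
    """Crude bag-of-words score in [-1, 1] for a news transcript."""
    words = re.findall(r"[a-z]+", transcript.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def trade_signal(transcript):
    score = sentiment(transcript)
    return "BUY" if score > 0.3 else "SELL" if score < -0.3 else "HOLD"

print(trade_signal("Acme posts record profit; analysts upgrade outlook"))  # BUY
```

Production systems will be subtler, but the leap from word-counting to a trade order is exactly this short.]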
Walmart is currently working on a system to automatically track all shopper movements in its stores, for use in marketing analysis.
Some (leading) police departments will have all officers video recording their full day by 2010.
Links: Blinkx.com; vquence.com.


Storage
Portable devices (phones in particular) will take on terabyte storage, to the point where by 2025, all movies ever made (including Bollywood) could be stored on an iPod-like device. There will be a very steep curve in the takeup of storage over the next 5-10 years, to the point where people will stop deleting anything: they _can_ keep and index everything, and the [labour] cost of deletion will be too high to be worthwhile.
[Yet according to US-based industry analysts Forrester (see here), the cost of storage equipment and management software currently consumes 10% of IT budgets, and will increase 4% this year. So I would say storage maintenance will remain a significant issue, deletions or no.]
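[For scale, a quick projection under an assumed doubling period - the baseline and rate are my illustrative guesses, not Bruce's figures:

```python
def projected_capacity_gb(base_gb=160, start_year=2008, end_year=2025,
                          doubling_years=1.5):
    """Exponential storage growth: capacity doubles every `doubling_years`."""
    doublings = (end_year - start_year) / doubling_years
    return base_gb * 2 ** doublings

# A 160 GB portable player in 2008 projects to roughly 400 TB by 2025 -
# ample headroom for "never delete anything", under these assumed rates.
print(f"{projected_capacity_gb() / 1000:.0f} TB")
```

Exponentials being what they are, shifting the doubling period even a little moves the endpoint enormously - which is the real bet behind Bruce's claim.]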

Voice
Stress analysers are working their way into call centres. Already two UK insurers are using voice analysis specifically as a component of their risk assessment. Bruce said that Kishkish, a company providing add-ons for Skype, already has a consumer-quality "lie detector" available [which I would suggest is of limited merit]. The US army has just started handing out portable voice analysers that can operate with a variety of Iraqi languages, with an 80% (?) success rate (after baselining each subject with 20 neutral questions).
Links: Kishkish

Human Networking
As LinkedIn is the most successful business networking tool, so other tools will be developed that will automate networking processes, to the point where a social network map could be built simply by analysis of the contents of email boxes. Bruce suggested such tools could be a boon for marketing in areas such as recruitment.
But that's only the beginning. Bruce depicts a point (in a process which has already started) where machines will automatically mine the web for all data about a given person.
There's more. Spock.com combines machine mining with a wiki - to enable people to add their own comments to a store of information about a particular person. There was a suggestion this will greatly fuel reputation management as an industry.
Links: Grokker.com; zoominfo.com; wink.com, spock.com, LinkedIn

Image Processing
Image recognition married with social processing. There could be great value in marrying computerised image recognition with social processing - say, having a few people validate an image [or identity-related information]. A California university was mentioned which claims 95% certainty on _partial_ face recognition.
Image recognition is such that by 2009, a service could be provided that tags with location any photo that includes landmarks as significant as a building [? - methinks this is optimistic].
Links: polarrose.com

Spatial Media

Disruption is in the chips. At $1 per GPS chip, GPS (and RFID) will become ubiquitous. One billion GPS users by 2016. Further, with expansion to tools such as Google Earth, there will be 3D views of every street, to the point where everything now achievable in Second Life could be done in a Google Streetview type environment. This has great applications, not only academic (eg museums) but also commercial and intelligence-related. By 2018, asset audits to become obsolete.
Links: Second Life, Google Earth

Sustainability Monitoring
Project Vulcan: carbon emission monitoring, spatially navigated, to the discrete level of buildings, daily updated. Carbon labelling at an asset/product level.
Links: Project Vulcan


Now, much of this I would take as nigh-on achievable with current technology, but so much would rely on the degree of uptake. I discussed this briefly with Bruce afterwards, and he acknowledged that a lot of what he talked about was likely to be taken up in a leadership context, ie by relatively few, key organisations. My point to him was that we never could have foreseen the mobile phone phenomenon: a) how ubiquitous they would become in a relatively short time; and b) how such a mass consumer uptake would fuel a host of technology spurts that might not otherwise have happened without such a gross device commodification.

Bruce also expressed to me a high degree of optimism that the takeup of sustainability and carbon-monitoring technology would be pretty much led by consumer (/voter) demand. My feeling, however, remains that although by now people are dead keen to see something done about global warming, a) they are unlikely to take too much action themselves; b) they (as a mass) may well baulk when gross personal costs or lifestyle changes are at stake.

My imagination was most definitely fired by Bruce's prognostications - as it has been each time I've heard him. But although he can give good outlines of what could be done with technology (with little to no leap from today's capabilities), how it actually pans out is, I feel, still up for grabs.

Tuesday, March 04, 2008

Create your own hi-tech life

Interesting article today about Jon Oxer, a bloke who's been up-teching his life - as a hobby. You know, electronicising his house, car, body, etc.

There's nothing too alarming about this: doubtless there's a lot of techy blokes (and a few women) around the world doing this.

Yet it's interesting to read about some of the things he's done.

He chipped his arm (injecting a rice-grain-sized gadget) to allow him to open the door without a key. He has electronically enabled all [accessible] doors and windows such that he can secure the whole house on command.
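The control logic behind "secure the whole house on command" needn't be exotic. A toy sketch - all names and IDs hypothetical, since the real work is in the actuator wiring:

```python
class Door:
    def __init__(self, name):
        self.name = name
        self.locked = False

    def lock(self):
        self.locked = True  # in reality: drive a relay or actuator
        print(f"{self.name}: locked")

AUTHORISED_TAGS = {"3f9a1c"}  # the implanted chip's ID (made up)

def on_tag_read(tag_id, door):
    """Unlock only for a recognised implant; reject everything else."""
    if tag_id in AUTHORISED_TAGS:
        door.locked = False
        print(f"{door.name}: unlocked for {tag_id}")
    else:
        print(f"{door.name}: rejected unknown tag {tag_id}")

def secure_house(doors):
    """The one-command 'lock everything' routine."""
    for door in doors:
        door.lock()

doors = [Door("front"), Door("back"), Door("garage")]
on_tag_read("3f9a1c", doors[0])  # the chipped arm at the front door
secure_house(doors)              # bedtime: lock the lot
```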

He even gets notified of every mail delivery - mostly junk, though, I'd imagine.

Imagine going to bed and forgetting to close/set/turn off something, then being able to press a few buttons to sort it out without getting up - presuming the remote is simple enough that you can remember how to do everything you need.

To his credit, he maintains a principle of getting everything to work invisibly - ie, wireless everywhere, and control mechanisms effectively hidden.

Throughout these projects, it's hard to escape the thought that when something goes wrong, it's going to create real inconvenience - such as not being able to get into the house. He'd be strongly advised to ensure there are manual override backups for everything; it's easy to forget something like that, to dire consequence.

A couple more salient points emerge from the article. First, for someone sufficiently enthused, imagination is just about the only limit to what can be achieved, given the componentry available.

Second: some of the innovations will doubtless prove an expensive waste of time. But out of all this will probably emerge a number of ideas that have legs. Fertile ground for venture capital.


Interesting to keep abreast of his initiatives - and the bloke's got a blog, so it is possible to keep up. And he says the phone's been ringing off the hook since the article came out.

Thursday, August 17, 2006

Tech: Just what does IBM do?

It might surprise you to know that IBM is the largest I.T. company in the world - by revenue. Global sales in 2005: IBM: US$91 billion; Microsoft: about $40 billion.

Of course, it used to be the hardware giant: in the 1960s and 70s, IBM simply defined the mainframe computers that once formed the backbone of large enterprises. Then the mainframe market was slowly crippled over the 1980s and 90s by the rise of micro computers: they were effectively a victim of the success of... IBM-format PCs. And the Dells of this world have demonstrated that the PC sector is a dangerously low-margin market.

Over time, IBM turned to technical/business services and software. Some of that software is its own technology, like DB2 (the new release is discussed here: it includes native XML support and autonomic memory management).

But IBM has also been buying up software companies left, right and centre, specifically to give it a full vertical business software offering.  In effect, it is seeking to lock in large enterprises by providing a full range of software and services across business needs.  This is a common trend among the larger software companies, which is why, for example, Microsoft and Oracle have equally been on the takeover warpath for several years.


Like Apple, IBM deserves credit for successfully re-inventing itself more than once. Whereas it once exploited its hegemonic dominance of the mainframe computer market to extract monopolistic profits (hence the epithet Incapacitating Business for Megabucks), it now operates much more competitively across a range of markets, leveraging off its brand name rather than its monolithic presence - something Microsoft is taking note of, as its own dominant position is eroded by Linux, OpenOffice, and other open source offerings.


2009 update: Q2 2009 revenue came in at $23.6 billion - and that's just the one quarter.  This is made up of:
  • 57% services - made up of technical services (39%) and business services (18%)
  • 22% software
  • 17% hardware
The remainder is revenue from financing businesses to buy their upscale hardware.  More complete press reports on IBM's second quarter financial figures can be found here and here.