
Wednesday, December 22, 2010

Future Sydney 4: Gehry's crumpled UTS building

As part of a continuing series on Sydney's future, here's an image of the planned UTS business school building, designed by architect Frank Gehry.  More details and more pictures can be found here - including the back facade, comprised of "large, angled sheets of glass".

This is the second startling building plan I've seen released by UTS, the University of Technology, Sydney.  The first, an engineering faculty, can be seen here.

The other visions of Sydney's future are rather less tangible, more costly, and politically difficult:
  • a bold proposal to create a large plaza in front of the Town Hall (by razing a full city block!)
  • the opening up of Circular Quay with the removal of the Cahill Expressway (with further distant visions leading from that post).


By way of contrast, there are some visions of a Sydney that never will be: alternatives for the Opera House (here and here), an opera theatre 'appendage' to the Opera House, and an alternative harbour bridge.

Thursday, September 03, 2009

iPhone and the future of personal devices

The future of personal devices is iPhone?

Apple's iPhone made it to the cover of Time magazine in 2007, the year it was released.  Now it has made the cover of New Scientist.  Why?

Because it's in the process of fulfilling a vision I - and many others - had long ago. That vision was for a ubiquitous device that would meet all one's digital needs - anywhere.

That shouldn't have been too hard - in theory.  I once had a PDA (personal digital assistant, a pocket-sized computer) that ran on Windows CE.  You beaut, I thought at the time, it's Windows-based and actively supported by Microsoft, so it will be a significant platform into the future, there'll be plenty of software for it, it will do everything.

That PDA only partially realised the vision.  It held music, photos, videos, spreadsheets, documents... and could connect wirelessly to the internet.  But as a general device, PDAs never grabbed the mass market's imagination in the same way that PCs, mobile phones and the internet did.

What happened?


First, a market overview.

Worldwide, there are already more mobile phone services than landlines (2005 figures, for example, were 2 billion vs 1.2 billion).

Further, there are already more mobile phones being sold than any other device (according to the above reference, 2005 sales were 830m mobiles, 210m desktop/laptop computers, 100m game consoles, 90m digital cameras).

Even earlier - 2004 - sales of Smartphones (integrated phone/PDA functionality) had overtaken standalone PDAs.  As of 2009, they constitute one in every seven mobile phones sold.

Back in 2004, as far as operating systems went, Windows CE was market leader at 48%, well ahead of Palm at 30% and RIM's Blackberry at 20%.

The picture is rather different now. The latest worldwide figures (Gartner, Q2 09):
  • Symbian 51% (Nokia, Motorola, Sony/Ericsson, and others)
  • RIM (Blackberry) 18.7%
  • Apple 13%
  • Microsoft 9%
  • Android (Google's new platform) 2%
  • Palm <1%
(The bulk of the rest - 5% odd - is Linux-based.)


Ultimately, PDAs were superseded by connected devices, driven by that consumer product of choice, the mobile phone. That's not the end of the story, but ubiquity in market penetration is gradually leading to ubiquity in functionality, and Apple is leading the charge. The iPhone may not be the market leader, but the breadth and volume of its applications are world-beating - a phenomenon illustrated impressively by the New Scientist article.

The other part of the equation is loading the device up with communication capabilities - especially location awareness, which has sparked a surprisingly large and imaginative range of applications.

Microsoft did have a vision for its operating system which encompassed both PDAs and smartphones, but execution failed.  Although they pushed it quite strongly through their developer community, their market share is inexorably declining because they simply never caught fire with the wider public.  That especially is where Apple shines, generating momentum that fosters further innovation.

On the downside, Apple is prone to imposing restrictions (on hardware, software, and connectivity enhancements) that are aimed at protecting its turf, including brand image, in one way or another. That has been a downfall in the past, and a caveat that may yet unseat their current ride to glory.

The original vision stands largely fulfilled, in actuality or near-term capability. Yet for me there remains an annoying gap in data entry: how to transfer your digital world into the device, especially when mobile. A fitted microphone and camera can go only part of the way. In my original PDA, the issue was partly addressed with a fold-out keyboard that the device could be plugged into. Still, typing as a paradigm leaves a lot to be desired. Waiting for the next leap forwards...

Thursday, August 27, 2009

SETI, Open source, and the socialisation of productivity

What does SETI have to do with Microsoft's furrowed brow?

We all know the Search for Extra-Terrestrial Intelligence, whereby the universe is scanned for signals throughout the electromagnetic spectrum which can be interpreted as originating with intelligent life. Some of us have run SETI@home: you download a screensaver, which runs in the background, borrowing your unused computer time to run a parcel of number crunching for SETI. Everybody wins: only your idle computer time is used, and it can have some wider community benefit - you may even be responsible for the first discovery of extraterrestrial life.

That was the first distributed grid computing project to gain widespread publicity. But the software is now available to turn any general project requiring major computer time into a socialised project. The Herald recently ran an article on Australian use of such software: specifically, BOINC, The Berkeley Open Infrastructure for Network Computing. The article said over 32,000 Australians were currently running BOINC projects, out of 1.7 million people worldwide.
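To make the mechanics concrete, here's a minimal sketch of the volunteer-computing loop: fetch a work unit, crunch it during idle time, return the result. Everything here is fabricated for illustration - it mimics the shape of a BOINC client, not its actual protocol.

```python
import random
import time

def fetch_work_unit():
    # Stand-in for downloading a signed task from the project server;
    # here we just fabricate a batch of noisy samples.
    return [random.gauss(0, 1) for _ in range(100_000)]

def crunch(samples):
    # Stand-in for the science code (e.g. scanning radio data for spikes):
    # report the strongest deviation from the mean as a crude 'signal'.
    mean = sum(samples) / len(samples)
    return max(abs(x - mean) for x in samples)

def report(result):
    # A real client uploads the result for cross-validation against
    # other volunteers' copies of the same work unit.
    print(f"result returned to project server: {result:.3f}")

for _ in range(3):      # a real client loops indefinitely, at low priority
    report(crunch(fetch_work_unit()))
    time.sleep(1)       # and yields whenever the machine is otherwise busy
```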

The scope is tremendous, not just for general scientific research, but also for any community-sector project that may not otherwise have the resources to get off the ground.

For the moment, here's a list of projects you may wish to take part in. Those are all scientific research, mainly in biology, physics and maths, but there's also a World Community Grid, which is specifically aimed at humanitarian projects.

As for Microsoft, the other side of community computing is software: open source, to be specific. Generally, an open source project is contributed to by many, carries no profit-oriented copyright, and is available for free. OpenOffice may be the most famous - a direct competitor to Microsoft's Office suite. And as a method of developing software that is freely available to all, open source has gained acceptance in most areas of my professional focus, business intelligence. Apart from the well-known MySQL database, there are open source tools available for most related areas: as well as database and BI software, there's also ETL, data profiling, and so on.

Over time, you should expect prices to tumble in all types of software directly affected by open source initiatives. Yes, the likes of Microsoft can expect some buffering from these forces due to brand-name strength. But yes too, Microsoft is worried enough that they are already working on alternative revenue streams, including jumping into the cloud. Those alternatives shouldn't see a collapse of capitalism any time soon, but the long-term trend can only benefit the public, particularly those who might not otherwise be able to afford such computer resources, particularly in the developing world.

In a wider sense, distributed computing and open source are simply harbingers of a globalisation and socialisation of productivity, for the benefit of all.

Thursday, August 20, 2009

Spaceship Earth-II: the future of Earth's life

"The earth is going to die in 500 million years!" exclaimed my eight-year-old today. And I had to illustrate to her how this is well beyond the span of our existence. Sort of a deanthropocentric exercise in reverse.

But what of it? Fundamentally, we don't like to think that there's nothing left of us - ever. But does that need to be the case? Yes, the sun is growing hotter, but we have hundreds of millions of years of technological advancement before the Earth becomes uninhabitable. And think where we've come in just one hundred years.

Last week, I was talking through a thought experiment with Mark on this topic.

Space is prohibitively large; commuting is not really an option. Even at the speed of light, the nearest star system to our own, Alpha Centauri, would take four years' travel. And it's questionable whether there's anything habitable there. It's a binary (plus) system, and the gravitational flux of two nearby suns may not foster stability.
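The tyranny of that distance is easy to quantify. Taking Alpha Centauri at roughly 4.37 light-years (a published figure), transit time scales directly with the fraction of light speed achieved:

```python
DISTANCE_LY = 4.37   # Alpha Centauri, approximately

for fraction_of_c in (1.0, 0.1, 0.01):
    years = DISTANCE_LY / fraction_of_c
    print(f"at {fraction_of_c:.0%} of light speed: {years:,.0f} years one way")
# 100% -> ~4 years; 10% -> ~44 years; 1% -> ~437 years.
# And even 1% of c is far beyond anything we've built.
```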

Further, our bodies evolved in gravity, and it's not clear we'd survive for extended periods in minimal gravity environments.

In Rendezvous With Rama, Arthur C Clarke posited a mammoth cylindrical body 50 km long, with habitation on the inside. That's an overwhelming construction endeavour. I think there are easier options.


My suggestion is that to travel beyond the Solar System would take far more massive an environment than we could possibly build ourselves. It would be simpler to grab an existing body, and power that away somehow. As Mark pointed out, this is the Space: 1999 scenario, a science fiction series where the moon was torn away from Earth.

Possibilities include using something large from the asteroid belt, a moon from Jupiter or Saturn (such as Ganymede), or maybe something far out, such as that erstwhile planet Pluto.

Issues include heat, propulsion, gravity, retention of atmosphere, and other life-sustaining variables. By the time it's worthwhile thinking about it, I'd say we'd have the technology to allow us a few options.

This is the stuff of science fiction, certainly; plenty of options have already been canvassed in that milieu. Burrowing underground would provide sturdy shelter, although digging enough habitable space would be Herculean. Other options include domes on the surface - or terraforming.


Ah, terraforming. Rather what happened to our own planet. Microbial life has built up our current atmosphere and environment; we're just the evolutionary outcomes that could adjust to it. It took hundreds of millions of years to develop, but I think it's reasonable to anticipate we'll be able to engineer biological solutions that work faster.

However, out beyond the easy reaches of the sun, everything freezes. There would need to be both sufficient gravity to hold an atmosphere (or to be able to continually regenerate it), and heat sources sufficient to prevent that freezing. The latter would be most feasible through nuclear fusion sources - we haven't succeeded at this yet, but I can see no reason it won't come. It's what the sun uses.

Gravity is a matter of using a large enough body. Life on Earth is, of course, evolved for our specific gravity, and much more research is needed to understand how or whether current life forms could adapt to lower gravity, or whether we'd need to engineer alterations that would allow various forms to survive in a somewhat different environment.

Because we would want to take with us as much of the existing variety of life as we could. This could involve storing samples at the DNA level, for later development/unpacking using either technological or substitute development (incubation) methods. In any case, plant and animal life should be considered an essential part of our environment - our being - and taking that with us would not be at issue. Bacteria and viruses too, surprisingly enough. Bacteria are our microbial engineers, a fundamental tool of life. And viruses have helped us become what we are: through infiltrating our germ lines, they have imparted in us the resilience - and functionality - that we possess today.

The Earth's variety of life evolved under a specific radiation regime: Earth's atmosphere and magnetic field screen out most stellar radiation, while the solar radiation that does get through generates mutation by occasionally knocking DNA around. Outside Earth's orbit, mutation would happen at a different rate, which we would have to account for. Lesser rates would not be an issue: we are now at the point of engineering our environment to overcome the 'need' for the adaptive outcomes of mutation. Greater rates of mutation would necessitate careful screening to optimise outcomes.

Yet that raises the question: outside the Earth's specific environmental womb, would it be more beneficial to engineer adaptation in ourselves, so that future generations can make the move more readily? The biggest barrier is ourselves: the fact that we are rather wedded to our current form, no matter how ill-adapted to space journeying. I suspect we would be more willing to put extra effort into optimising our environment than to force evolutionary change on our own grandchildren.

I have great optimism that we will survive in the long run. Even if, to paraphrase Steve Kilbey, we end up as digital memory*.


None of this is a substitute for getting our own planet in order. But if we can succeed in that, we'll probably be well placed to survive past the use-by date of our planet.


*The Church: Fog, (1992 B-side to Ripple)
It hurts to think that in a hundred years
We'll all just be microfiche
Our names and the names of our songs
Cataloged and filed away


- however, compared to the fate of most of our ancestors, I'd be happy to survive in digital form.

Wednesday, March 11, 2009

McCabe I.T. prognoses 1: the cloud

It's hard making intelligent predictions. Science fiction's successes have been notably sporadic, with the odd fax and video player overwhelmed by flying cars and time machines. But everyone was caught on the hop by home computers, mobile phones and the internet, so the would-bes are trying to make it up with high-impact but outlandish speculations.

Bruce McCabe is a researcher and analyst (his company is called S2 Intelligence) who makes his living predicting the future course of technology for corporate clients who want to keep on top of broad trends. In particular, he is wont to point out technological change that will be "disruptive" to business - that is, major developments will bring about changes to business models, negatively impacting those who haven't kept up, and providing advantage to those who are ahead of the game.

The latter must be where he ekes out his niche: competitive advantage is a key issue for corporations, and technology is the biggest vector for change.


Thus to McCabe's latest review, dated January 2009. It covers briefs on 34 aspects of technology; although this is ultimately an admixture of intelligence, knowledge and speculation, credit should be given to McCabe for his length of service in this field. His work must be worthwhile, since he is still consulting and presenting to conferences at least five years after I first saw him.


Yet the first topic - cloud computing - is a fraught one: the term has been somewhat abused, often coming to refer to any outsourced I.T. services, whereas it more accurately refers to computing services (particularly storage and processing power) that are leased from a third party via the internet, and abstracted in terms of size (and so very scalable) and physical location. It is chiefly the scalability and on-demand nature of such a service that brings business benefits over locating and managing one's own equipment.

McCabe visited Salesforce.com, whose success in this field may encourage people to overstate the degree of adoption of cloud computing. McCabe: "in the past five years not a single Salesforce.com customer interviewed by S2 has expressed anything other than strong positive outcomes. That outcome is unique."

It is a fair comment that: "this leadership is rapidly moving the goalposts for Microsoft, SAP, Oracle and every other provider of business software." He goes further: "A new world of software development is opening up. It is not a wholesale displacement of the old one... 'In the cloud' software development will, however, be strongly associated with rapid, disruptive, innovation by businesses".

Although this may be the way the world eventually understands cloud computing, McCabe effectively conflates a number of different trends:
- cloud computing - scalable leasing of computing power;
- free and open source software - including, for example, Google's offerings of business software that directly competes with Microsoft;
- outsourcing in general;
- the emergence of software development services, especially from India.

As with all attempts at outsourcing, if one's I.T. capabilities and needs are not managed effectively, it matters not whether they are located in-house or god-knows-where. And it remains that outsourcing in whatever form it takes makes management exponentially harder; the hazards are also far greater. We've all read or experienced these outsourcing efforts: incredible disruption to business when the switch was flicked; equivalent headcounts hired as consultants down the track; and sometimes a complete volte face to bring services back in the fold.


Nonetheless, it must be acknowledged that the trends described above (cloud computing plus) are going to figure big and are going to disrupt traditional business models. The greatest business benefit comes where services are inherently commodifiable and scalable in the first place, and thus lend themselves well to such abstraction.

Wednesday, December 10, 2008

Future tech: the possibilities in mapping

In the course of a presentation on mapping technologies yesterday, quite a few interesting applications came up.

Mapping technologies freely available today include Google Earth, Google Maps, and Microsoft's Virtual Earth (available to the consumer as Live Search Maps service).

Live Search Maps is a cut-down equivalent to Google Maps - and of less value in Australia thus far. But beyond a simple map service, these technologies have more meaning behind the scenes - in what can be done with the underlying technologies. The mapping engines of Google Maps and Virtual Earth can be used in a variety of contexts, some rather distant from the core consumer services provided. For example, I was told of Virtual Earth being used to navigate ultra-high resolution images of human eyes. In effect, the technology has been transferred to a very useful medical application. By extension, the possibilities are endless.

Under the hood, the technologies simply constitute mechanisms to navigate through a physical landscape of any dimensions or locations. No reason this can't include (with the appropriate data sets) maps of the moon, Mars, the known universe, right on down to any physical form that has been represented in sufficient detail. To this can be added third-party data for a variety of purposes. This is already being done to plot specific sets of geographical points, but it can also include representations of weather information, 3D rendered objects, older photos or created/imagined photos. You could thus superimpose on the present a planned future (and so see a full context for this new wing for Sydney's Museum of Contemporary Art), or even an imagined future. You could superimpose the past. It could be quite valuable for analysing history or archaeology. You could also look at a putative past, such as a different plan for the Sydney Opera House.
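Under the hood again: 'navigating a landscape' largely means mapping a latitude/longitude and zoom level onto a pyramid of pre-rendered image tiles. Here's a sketch of the standard Web Mercator tile arithmetic these engines popularised (the coordinates below are the Sydney Opera House, approximately):

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    # Convert WGS84 coordinates to tile indices in the Web Mercator
    # scheme used by the major slippy-map services.
    n = 2 ** zoom   # tiles per side at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y

# Any data set - lunar, Martian, or a scanned human eye - could be
# addressed the same way, given tiles rendered at each zoom level.
print(latlon_to_tile(-33.8568, 151.2153, 15))   # -> roughly (30147, 19662)
```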



In a broader sense, this is a demonstration that technologies that have emerged over the past five to ten years are likely to have a much more profound impact on us than some of the comparatively trivial applications available today. If it is surprising that the free distribution (of much of these technologies) is viable in a business sense - and much of it has proven so - then it may also surprise us what we will be able to do with little effort and no cost in the future.

Tuesday, September 23, 2008

Future cloud computing Googlified

Google's official blog discussed cloud computing 10 years hence - far more eloquently than I did recently.

In a nutshell, most computing power will come from web-based services (effectively, Everything-as-a-Service), and our own computing resources will be mere devices that hang off the cloud. Not quite like the dumb terminals on mainframes of yore, though. They rightly see continued exponential growth in the three mainstays of power: processors, storage, and networking (the essential plumbing). Our devices will certainly be powerful - but not a shade on cloud resources. (I see the power in local devices being chiefly used to drive our interaction with the cloud, in the long run.)
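As a toy illustration of what 'continued exponential growth' buys: assume capacity doubles every two years (a Moore's-law-style assumption, not a measured figure), and a decade multiplies everything by 32. The starting points below are assumed, not sourced.

```python
def projected(base, years, doubling_period=2.0):
    # Capacity after `years`, if it doubles every `doubling_period` years.
    return base * 2 ** (years / doubling_period)

# Illustrative 2008-ish starting points (assumed, not sourced):
for name, base, unit in (("storage", 250, "GB"), ("bandwidth", 10, "Mbps")):
    print(name, [f"{projected(base, y):,.0f} {unit}" for y in (0, 5, 10)])
```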

They see a great plethora of devices hooked up, many of them far smaller and more specialised in application than our typical laptops/desktops.

They also dare to speculate on the smarts - intelligence - built into "the cloud". Read it all here.

Thursday, May 15, 2008

Bruce McCabe on Future Technology


Every couple of years I have the pleasure to hear a talk by Bruce McCabe, and each time he presents a riveting vision of our technological future.

McCabe is a Sydney-based industry analyst who, through his company S2 Intelligence, gives vision to a number of large organisations, including all Australian governments at State and Federal level.

The following increasingly terse narrative is drawn from my notes of his speech, in which he canvassed a number of technological developments that are already present, and will be 'disruptive' [to businesses] in the near to medium term. My comments (or elaborations) are in square brackets. Throughout, I noted that most of these developments have very significant privacy implications. Any errors or omissions, blame me.

The sections below are Video, Storage, Voice, Human Networking, Image Processing, Spatial Media, and Sustainability Monitoring. I leave it to you to surf the sites mentioned; I have not yet had time to go through them all.

Video
Although currently mainly consumer-driven, video over the internet will be increasingly geared to business needs; it will account for 98% of internet traffic in a year or two. Technology is already available that can index the content of video clips, so the content of videos will become searchable. Use of video will become more structured [and commodified] to the point where clips can be treated similarly to text-based objects, including cutting, pasting, and hyperlinking to the middle of a clip. Moreover, there will be automated [computer-, not human-directed] analysis of clips - news in particular. Reuters is starting video news feeds that are directed specifically to machine analysis. Bulletins will be mined for meaning; for example, a report on a given company could be analysed for sentiment, which could feed into automated share traders. (I find this concept particularly insidious, as any automated trading can exaggerate the volatility of share markets, and this mechanism has potential to disrupt markets on flimsy bases; a toy sketch of the pipeline follows the links below.)
Walmart is currently working on a system to automatically track all shopper movements in its stores, for use in marketing analysis.
Some (lead) police departments will have all officers video recording their full day by 2010.
Links: Blinkx.com; vquence.com.
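A toy version of the sentiment-to-trading pipeline - word lists and thresholds invented for illustration; real systems use trained models, but the shape (and the fragility) is the same:

```python
POSITIVE = {"profit", "growth", "record", "beat", "upgrade"}    # illustrative
NEGATIVE = {"loss", "lawsuit", "recall", "downgrade", "miss"}   # word lists

def sentiment(transcript):
    # Crude score in [-1, 1] over a machine-transcribed news clip.
    words = transcript.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

clip = "Acme posts record profit growth despite one product recall"
score = sentiment(clip)
signal = "BUY" if score > 0.3 else "SELL" if score < -0.3 else "HOLD"
print(score, signal)   # 0.5 BUY - decided on five words of a headline
```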


Storage
Portable devices (phones in particular) will take up Terabyte storage, to the point where by 2025, all movies ever made (including Bollywood) could be stored on an iPod-like device. There will be a very steep curve in the takeup of storage over the next 5-10 years, to the point where people will stop deleting anything: they _can_ keep and index everything, and the [labour] cost of deletion will be too high to be worthwhile.
[Yet according to US-based industry analysts Forrester (see here), the cost of storage equipment and management software currently consumes 10% of IT budgets, and will increase 4% this year. So I would say storage maintenance will remain a significant issue, deletions or no.]
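Running rough numbers on the 2025 claim (the movie count and sizes below are order-of-magnitude guesses, not sourced figures):

```python
# Order-of-magnitude guesses, for illustration only:
MOVIES = 500_000        # 'all movies ever made', Bollywood included
GB_PER_MOVIE = 2        # compressed, standard definition
library_gb = MOVIES * GB_PER_MOVIE
print(f"library: ~{library_gb / 1e6:.0f} PB")   # ~1 petabyte

# If pocket storage doubles every two years from ~100 GB (2008-ish):
capacity_gb, year = 100, 2008
while capacity_gb < library_gb:
    capacity_gb *= 2
    year += 2
print(f"crossover around {year}")   # ~2036 on these assumptions - later than 2025
```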

Voice
Stress analysers are working their way into call centres. Already two UK insurers are using voice analysis specifically as a component of their risk assessment. Bruce said that Kishkish, a company providing add-ons for Skype, already has a consumer-quality "lie detector" available [which I would suggest is of limited merit]. The US army has just started handing out portable voice analysers that can operate with a variety of Iraqi languages, with an 80% (?) success rate (after baselining each subject with 20 neutral questions).
Links: Kishkish
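The baselining described - 20 neutral questions, then flag deviations - is essentially an outlier test. A toy version, with fabricated pitch values standing in for whatever features a real analyser extracts:

```python
from statistics import mean, stdev

def calibrate(baseline_values):
    # Fit a per-subject baseline from ~20 neutral-question responses.
    return mean(baseline_values), stdev(baseline_values)

def flagged(value, mu, sigma, threshold=2.0):
    # Flag an answer more than `threshold` standard deviations from baseline.
    return abs(value - mu) / sigma > threshold

neutral_pitch_hz = [118, 121, 119, 122, 120, 117, 121, 119, 120, 118,
                    122, 119, 121, 120, 118, 121, 119, 120, 122, 117]  # fabricated
mu, sigma = calibrate(neutral_pitch_hz)
print(flagged(131, mu, sigma))   # True - well outside this subject's normal range
```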

Human Networking
As LinkedIn is the most successful business networking tool, so other tools will be developed that will automate networking processes, to the point where a social network map could be built simply by analysis of the contents of email boxes. Bruce suggested such tools could be a boon for marketing in areas such as recruitment.
But that's only the beginning. Bruce depicts a point (in a process which has already started) where machines will automatically mine the web for all data about a given person.
There's more. Spock.com combines machine mining with a wiki - to enable people to add their own comments to a store of information about a particular person. There was a suggestion this will greatly fuel reputation management as an industry.
Links: Grokker.com; zoominfo.com; wink.com, spock.com, LinkedIn
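How a social map falls out of mailbox headers alone: count sender-recipient pairs and the weighted graph is the network, without reading a single message body. (Addresses are invented.)

```python
from collections import Counter

# From/To header pairs harvested from a mailbox (fabricated examples):
messages = [
    ("alice@example.com", ["bob@example.com", "carol@example.com"]),
    ("bob@example.com",   ["alice@example.com"]),
    ("alice@example.com", ["bob@example.com"]),
]

ties = Counter()
for sender, recipients in messages:
    for recipient in recipients:
        ties[frozenset((sender, recipient))] += 1   # undirected tie strength

for pair, weight in ties.most_common():
    print(sorted(pair), weight)   # alice<->bob twice: the strongest tie
```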

Image Processing
There could be great value in marrying computerised image recognition with social processing - say, having a few people validate an image [or identity-related information]. A California university was mentioned which claims 95% certainty on _partial_ face recognition.
Image recognition is such that by 2009, a service could be provided that tags with location any photo that includes landmarks as significant as a building [? - methinks this is optimistic].
Links: polarrose.com

Spatial Media

Disruption is in the chips. At $1 per GPS chip, GPS (and RFID) will become ubiquitous. One billion GPS users by 2016. Further, with expansion to tools such as Google Earth, there will be 3D views of every street, to the point where everything now achievable in Second Life could be done in a Google Streetview-type environment. This has great applications, not only academic (eg museums) but also commercial and intelligence-related. By 2018, asset audits will become obsolete.
Links: Second Life, Google Earth

Sustainability Monitoring
Project Vulcan: carbon emission monitoring, spatially navigated, to the discrete level of buildings, daily updated. Carbon labelling at an asset/product level.
Links: Project Vulcan


Now, much of this I would take as nigh-on achievable with current technology, but so much would rely on the degree of uptake. I discussed this briefly with Bruce afterwards, and he acknowledged that a lot of what he talked about was likely to be taken up in a leadership context, ie by relatively few, key organisations. My point to him was that we never could have foreseen the mobile phone phenomenon: a) how ubiquitous they would become in a relatively short time; and b) how such a mass consumer uptake would fuel a host of technology spurts that might not otherwise have happened without such a gross device commodification.

Bruce also expressed to me a high degree of optimism that the takeup of sustainability and carbon-monitoring technology would be pretty much led by consumer (/voter) demand. My feeling, however, remains that although by now people are dead keen to see something done about global warming, a) they are unlikely to take too much action themselves; b) they (as a mass) may well baulk when gross personal costs or lifestyle changes are at stake.

My imagination was most definitely fired by Bruce's prognostications - as it has been each time I've heard him. But although he can give good outlines of what could be done with technology (with little to no leap from today's capabilities), how it actually pans out is, I feel, still up for grabs.

Thursday, February 14, 2008

The past impacts the future

“we are the bearers of many blessings from our ancestors; therefore we must also be the bearers of their burdens.”

- something that people too often forget when there is a responsibility to fulfill.

Wednesday, December 19, 2007

Human evolution is accelerating?

I take the opportunity to make brief mention here of a study from the International HapMap Project (their website here).

I've read rather different takes on their claim that human evolution is speeding up. I have in front of me one article from the New Scientist, and another from the Sydney Morning Herald (sourced from the Guardian).

They quote different authors of the same study, published in the Proceedings of the National Academy of Sciences, whose gist is this acceleration.

The New Scientist version (which I would trust more) quotes 1800 genes (7% of the human genome) as having changed through natural selection in the past 50,000 years. Examples of changes given (variously by the reports) include resistance to colder climates and to some diseases, and lactose tolerance into adulthood - for some.

The Herald/Guardian report at least quotes some dissenting views, which tend to revolve around the notion that our manipulation of the environment (including the built environment and technology) has now sheltered us from the effects of natural selection.

I'm not an expert in genetics, but I remain skeptical, largely for the reason above, but also because it sounds to me (at this stage in my readings) so damn counterintuitive.

One of the authors, paleoanthropologist John Hawks, has at least one blog - possibly several - and he writes here on this study. Sounds like that's preliminary, with more to come.

I'm happy to be proven wrong. There is undoubtedly some philosophical/epistemological debate around this.

11-Aug-09 Update: I would note that recent readings suggest humans are not likely to be up for much further evolution - simply because we mould our own environment, and we are so numerous. Unless there is some global environmental disaster to cause a genetic 'bottleneck', any mutations will get lost in the mix, and will not in any case experience selection specifically for survival.