I have a memory of a W3C (World Wide Web Consortium) seminar in Sydney several years back. It discussed their efforts to put meaning into web content via the Semantic Web, using a concept/relationship mapping language called OWL (the Web Ontology Language), which builds on RDF (a metadata framework) and is commonly written in XML syntax.
They intended to arrive at a structure that was universally navigable by machine, yet retained, for each specialty area, its own language and concepts. Yes, it was developed by academics, for academic applications.
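The core RDF idea mentioned above can be sketched without any of the XML machinery: knowledge is stored as subject-predicate-object "triples" that a program can traverse. Here is a minimal, hand-rolled Python illustration (the names and facts are invented for the example, and a real system would use RDF tooling rather than plain tuples):

```python
# A toy triple store: each fact is a (subject, predicate, object) tuple.
# This mimics the RDF model in spirit only - all names here are made up.
triples = {
    ("Sydney", "isA", "City"),
    ("Sydney", "locatedIn", "Australia"),
    ("OWL", "buildsOn", "RDF"),
}

def objects(subject, predicate):
    """Find every object linked from `subject` by `predicate`."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}

# A machine can 'navigate' the relationship without understanding English:
print(objects("Sydney", "locatedIn"))  # -> {'Australia'}
```

The point is that the relationships, not just the words, are explicit, which is what makes the structure mechanically navigable.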
This was before the concept of Web 2.0 was sufficiently popularised to gain a solid meaning. At the time, I believe they used the term Web 2.0 to describe their endeavour.
Times change, meanings change. The term Web 2.0 has been usurped for another purpose, and it looks like the W3 Consortium is now using Web 3.0 instead. At the current state of play, the simplest description I have seen of the evolution of the web (from Jean-Michel Texier via Peter Thomas) is as follows:
* Web 1.0 was for authors [to be read]
* Web 2.0 is for users [fosters interaction]
* Web 3.0 is also for machines [fosters automation]
In effect, Web 3.0 should enable more rigorous discovery and collation of information from the far corners of the web - something like what Google could be, if it had the full smarts. However, it only works where content authors add the background tags and capabilities, so it is most likely to be taken up for knowledge- and information-building purposes: research, reference materials and databases. But this is a deceptively powerful paradigm, and the sky's the limit for assembling useful meaning. Next to it, today's Google would look like a paper telephone directory... though by then, Google would have evolved to make full use of it: a fully referenced assembler of knowledge, rather than isolated lumps of unverified information.
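To make the "collation" claim concrete: if two sites each publish a few machine-readable facts, a program can merge them and answer a question that neither site answers on its own. This continues the toy triple sketch above; the sites, papers and people are entirely hypothetical:

```python
# Two hypothetical sites, each publishing machine-readable triples.
site_a = {("PlatypusPaper2021", "writtenBy", "DrSmith")}
site_b = {("DrSmith", "worksAt", "UniversityOfSydney")}

# Collation: merging the two graphs is just a set union.
merged = site_a | site_b

def follow(start, *predicates):
    """Walk a chain of predicates from `start` through the merged triples."""
    nodes = {start}
    for pred in predicates:
        nodes = {o for (s, p, o) in merged if s in nodes and p == pred}
    return nodes

# Who employs the author of the paper? The answer spans both sites.
print(follow("PlatypusPaper2021", "writtenBy", "worksAt"))
# -> {'UniversityOfSydney'}
```

A keyword search engine could not make that two-hop connection reliably; explicit relationships make it a mechanical lookup.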
PS If interested in Data Quality in a technical, database sense, see my latest tech post. (This one was intended for a generalised audience!)