
Is Your Blog Ready for Web 3.0?

Web 3.0: When Web Sites Become Web Services by Read/Write/Web gives us more than a glimpse into the future of websites and blogs: Web 3.0 is being built here and now. Are you paying attention?

Today’s Web has terabytes of information available to humans, but hidden from computers. It is a paradox that information is stuck inside HTML pages, formatted in esoteric ways that are difficult for machines to process. The so-called Web 3.0, which is likely to be a precursor of the real Semantic Web, is going to change this. What we mean by ‘Web 3.0’ is that major web sites are going to be transformed into web services – and will effectively expose their information to the world.

The transformation will happen in one of two ways. Some web sites will follow the example of Amazon, del.icio.us and Flickr and will offer their information via a REST API. Others will try to keep their information proprietary, but it will be opened via mashups created using services like Dapper, Teqlo and Yahoo! Pipes. The net effect will be that unstructured information will give way to structured information – paving the road to more intelligent computing.
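
To make the contrast concrete, here is a minimal sketch in TypeScript of what consuming a site-as-a-service looks like. The endpoint and response fields are made up for illustration, but the pattern matches the REST APIs that Amazon, del.icio.us and Flickr expose: request a URL, get structured data back, no HTML parsing required.

    // Minimal sketch: pulling structured data from a hypothetical REST API
    // instead of scraping it out of an HTML page.
    interface Bookmark {
      url: string;
      title: string;
      tags: string[];
    }

    async function fetchBookmarks(user: string): Promise<Bookmark[]> {
      // One request returns machine-readable JSON rather than markup.
      const response = await fetch(
        `https://api.example.com/v1/users/${encodeURIComponent(user)}/bookmarks`
      );
      if (!response.ok) {
        throw new Error(`Request failed with status ${response.status}`);
      }
      return (await response.json()) as Bookmark[];
    }

    // The structured result is immediately usable by other programs.
    fetchBookmarks("lorelle").then((bookmarks) => {
      for (const b of bookmarks) {
        console.log(`${b.title} -> ${b.url} [${b.tags.join(", ")}]`);
      }
    });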

One of the biggest issues facing bloggers in a Web 3.0 world is site scraping: grabbing content from other sites and blogs to reuse on other websites and blogs. Some scrapers are getting smarter, working within the fine lines of copyright law and fair use, but others are coming at the web hard and fast, grabbing content from anywhere and everywhere they can get it. It is getting harder to find original content, and to protect what is rightfully yours. The ignorant mentality of “if it’s on the web, it’s free for the taking” is gaining ground.

How will this impact bloggers?

As websites, those static billboards representing businesses around the world, become more and more interactive, where does that leave bloggers? How can we compete?

Will blogging be swallowed up into websites, or will it find a face of its own? Or will it lead the way?

What do you think?

Copyright Lorelle VanFossen, member of the 9Rules Network

9 Comments

  1. Posted April 3, 2007 at 7:20 am | Permalink

    I think we need to stop with the vague, valueless buzzwords personally.

  2. Posted April 3, 2007 at 7:26 am | Permalink

    There are ways to protect your content from scrapers. Loading content with AJAX, for example, defeats most robots and scrapers. But it also means the content will not be indexable by search engines.

    So there’s the rub.

    When legitimate sites like Google and Technorati employ the same or similar technologies as the unscrupulous, you have to take the good with the bad.

    What about creating our own APIs for our blogs for these legitimate sites? Allow Google, Yahoo, Technorati, etc. access through a back door that can be tracked and monitored? (See the sketch at the end of this comment.)

    Would anyone want to create a WordPress API plug-in? Would Google find this of benefit?

    Only time will tell how we relate in the future.
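
    A rough sketch of what such a back door could look like, assuming a small Node.js service in front of the blog; the keys, crawler names and data below are invented for illustration only:

        // Hypothetical "back door" API for crawlers we trust, with every
        // request tracked. Keys and data are made up for this sketch.
        import { createServer } from "node:http";

        const crawlerKeys = new Map<string, string>([
          ["key-google-123", "Google"],
          ["key-technorati-456", "Technorati"],
        ]);

        const posts = [
          { slug: "web-3-0", title: "Is Your Blog Ready for Web 3.0?", tags: ["web3"] },
        ];

        createServer((req, res) => {
          const key = req.headers["x-api-key"];
          const crawler = typeof key === "string" ? crawlerKeys.get(key) : undefined;

          if (!crawler) {
            res.statusCode = 403;
            res.end("Unknown client");
            return;
          }

          // The access is monitored: log who asked for what, and when.
          console.log(`${new Date().toISOString()} ${crawler} requested ${req.url}`);

          res.writeHead(200, { "Content-Type": "application/json" });
          res.end(JSON.stringify(posts));
        }).listen(8080);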

  3. Posted April 3, 2007 at 8:14 am | Permalink

    I think there’s a certain number of bloggers out there who are simply using blogging as a clipboard for the web: cutting and pasting original articles from other places into their blogs without acknowledging the source, differentiating it from their own content, or even linking back. Robotic content scrapers are a problem, but one the search engines and technology will deal with (as we see them constantly getting delisted on Google and Technorati).

    The organic content scrapers who are also bloggers are much harder to deal with.

  4. Posted April 3, 2007 at 8:20 am | Permalink

    I would love it if APIs provided more configuration options to content owners. Right now, my choices are to fully distribute my content according to the general terms of the API, or opt out completely (and unfortunately, some major APIs don’t even provide an opt-out, which is not cool). A middle ground giving more control to content authors would be nice. It would help us share more than we’re willing to today.

    The same goes for authenticated feeds. Not all content put on the web is meant for public viewing; for instance, we may post private photos and other sensitive information targeted only at a small audience. But today’s RSS feeds don’t work with private content. It’s all or nothing: make it totally public, or leave it out of the feed entirely.
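
    A minimal sketch of what an authenticated feed request could look like, assuming the blog offered a private feed URL protected by HTTP Basic auth; the URL and credentials here are hypothetical:

        // Hypothetical private feed, readable only with per-reader credentials,
        // so private posts never have to appear in the public feed.
        async function fetchPrivateFeed(user: string, password: string): Promise<string> {
          const auth = Buffer.from(`${user}:${password}`).toString("base64");
          const response = await fetch("https://blog.example.com/feed/private.xml", {
            headers: { Authorization: `Basic ${auth}` },
          });
          if (!response.ok) {
            throw new Error(`Feed request failed: ${response.status}`);
          }
          return response.text(); // Raw RSS/Atom XML for the feed reader to parse.
        }

        fetchPrivateFeed("family-member", "secret").then((xml) => console.log(xml.length));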

  5. Posted April 3, 2007 at 9:57 am | Permalink

    I think it’s time the WordPress makers started thinking about this. They could update the WordPress software to be Web 3.0 ready.

  6. Posted April 3, 2007 at 10:27 am | Permalink

    Eventually, lawyers will figure out a way to make money suing scrapers. The real problem is when proper credit is not given to the source.
    Bloggers will adapt. If they stop, the scrapers will not have anything anyway.

  7. Posted April 4, 2007 at 12:02 am | Permalink

    More people read my blog than my company website, so in effect my blog has become my company site.

  8. Posted April 4, 2007 at 2:19 am | Permalink

    I think I saw a buzzword, I did, I did….

  9. Posted April 4, 2007 at 8:15 am | Permalink

    I don’t know if I’m ready for 3.0, but I think it’s slowly evolving into 3.0. Everyone is jumping on the bandwagon, and when someone does something, people pick up on it and it evolves and migrates. So I think we’re all prepping for 3.0.


3 Trackbacks/Pingbacks

  1. […] indeed, a very interesting article that appeared on Lorelle’s blog.. are we […]

  2. […] Is Your Blog Ready for Web 3.0? […]

  3. […] indeed, a very interesting article that appeared on Lorelle’s blog.. are we […]
