Why Reinventing the Internet Is Bad

I wrote this in reaction to the Pantagraph article “Researchers mull scrapping Internet, starting over.”

The professor leans into the microphone to emphasize his point further. “Yes, Bloomington-Normal works, but the current layout has some serious deficiencies.”

At this, a Normalite in the crowd stands up and asks, “What are the problems with the community?”

“Well, for instance, there are bottlenecks on streets when emergency vehicles use them. As you may know, a few seconds of delay crossing a busy street can cost lives. In addition, many of our roads wind around or end in cul-de-sacs, slowing or preventing the good flow of traffic, especially on the east side in the newer subdivisions. This prevents people from getting where they want to go quickly.”

The professor adds, “Also, we do not know who drives the cars on the road; criminals might steal someone’s car and masquerade as the owner, so you never know who is really in that minivan behind you. Not to mention that pedestrians and cyclists don’t need to identify themselves at all. The problem is that the early settlers did not anticipate these changes in travel. In the past, everyone in town knew each other and relied on horse and foot travel to move people and goods around the city. Now we have automobiles, motorcycles, bikes, and even airplanes. Much of the infrastructure for these modes of travel was put in place ad hoc as the need arose. Just look at where the airport is; it prevents growth on that side of Bloomington.” The scientist scrunches up his nose in disgust.

“I installed SimCity 4 and The Sims 3 on this laptop. My plan is to create a side-by-side Bloomington-Normal, and my team will work out the kinks in the current system. This new system will be more flexible and faster, and we will be able to track the movement of goods and people better. In a few years, we will overlay this new system, and society will benefit as a result.”

Another person rises to ask a question, this time someone from Bloomington. “What if we don’t like it? Can we go back to the way it was?”

“But you WILL like it. After all, the best minds around are banding together for your benefit to improve today’s outmoded system.”

The internet community now numbers roughly 1.6 billion people, who have radically altered the original infrastructure of the internet and “bolted on” a plethora of technologies to what was once used only by the military and scientific communities. Now, decades later, some scientists think there are problems with how the internet works. True, there are problems. Like any system created by man, there were trade-offs: countries can filter the information their citizens see, corporations can deny the free flow of information through threats and lawsuits, and criminals can do business in relative safety. Welcome to the real world.

At its core, the new internet the article describes seems to undermine democracy, since the marketplace of ideas depends on both anonymity and free association. Both would be undermined because every packet is tracked. The researchers also seem to want priority routing of packets, which could give rise to a tiered internet in which those with money or resources make sure their packets move around it the quickest. And even with a new system, there is no guarantee that a new technology that reshapes how people live will fit well.

Humans never reinvent their societies on a whim, so why reinvent the internet? We did not bulldoze cities in Europe once the internal combustion engine was invented; Europeans built around it. What about electricity, water treatment, or even cell phones? Do we destroy what society has made in the past just to incorporate new ideas? No, that is wasteful; we INTEGRATE. New ideas and technologies come about, and we fold them into the rest of our knowledge.

We do not throw away the past, nor should we throw away what the internet has become, as it is a part of who we are as a society. Maybe some socialist scientists would like to build a better internet, feeling that they know better than the population at large how it should work. Of course, this is all in the guise of protecting ourselves from ourselves by creating something that will magically eliminate spam, porn, and illicit activities.

If the scientists’ idea is great, it will naturally be picked up by the population at large in time. Take texting: in the last five years it has become ubiquitous. Many of my friends and colleagues could not understand this, calling it a fad. Now they do it themselves, because it has uses apart from calling someone. This technology was bolted onto the cell network, but you don’t hear wireless carriers complaining that they need to reinvent their networks to fully integrate it.

The internet should not be reinvented, only improved. It came into being ad hoc out of the needs of society, ethnic groups, companies, and governments. We designed a great system, if flawed in places, which gives those who access it virtually all of human knowledge.

Copyright and Obscurity

I came across this quote a few weeks ago when reading an article about piracy:

Obscurity is a far greater threat to authors and creative artists than piracy. - Tim O'Reilly 

It is interesting to note that most people’s creative work, mine included, will simply remain in obscurity. Even if I keep this site up for 80 years, the text on it will slowly disappear from the net when I am gone. It is a wonder that people fight so hard for copyright extensions, since by its very definition copyright slows the spread of their ideas.

Now works stay locked up for nearly 100 years. That is good if you are a big corporate entity that made it big, but for the vast majority of works it stinks, since it relegates their creators to obscurity simply because there is value in gatekeeping.

The gatekeeping is the worst part because it locks up culture, preventing people from reading a work, making derivative works that build our culture, and sharing great pieces with their friends. Don’t get me wrong, I do like copyright, because it gives people the incentive to create, knowing that they can profit and continue to do what they love most: create. However, the term should be shortened to give society a chance to build off a work. I don’t know exactly how long is right, but 10-20 years seems ample.

A shortened term might give creators more incentive to continue creating, and give people who have grown to love a work a chance to use their imagination on it. Once in the wild, an idea becomes bigger than its creator, and restricting ideas is a great injustice to society.

Intimacy Model of Social Networking Sites

I was bored a few weeks ago and created this framework describing a few social networking sites. These sites seem to be the next stage in the progression of how people interact on the Internet; they are just another thing people came up with that mimics how the real world works. This article is not the be-all and end-all, since it analyzes only three popular sites. However, like Google, those sites will become dominant in the US because of their critical mass.

No one site will have a lock on people’s time, since we have alternate egos throughout the net. I expect people to join these sites as their “public” personas, but you will also find many of them on more targeted sites that appeal to them in more personal ways, where they can find people of like interests.

As in the real world, social site users interact most with the people they are closest to in real life. Most spend 66-85% of their time communicating with or looking at the profiles of the 4 to 11 people they are closest to.

On these sites, people spend time sending small communications to one another that update each other’s pages, or, if they are online at the same time, they engage in mini-applications such as quiz games or use in-site chat programs. Traditionally, this communication has taken place via phone calls, text messages, instant messages, and face-to-face meetings.

Outside the intimate group of contacts, people keep track of their casual acquaintances much as they do in the real world. These are people they know but do not interact with daily, such as former workmates, high school and college friends, and members of the online and offline groups they take part in, such as church groups or gamer clans.

While people have varying numbers of these contacts connected to their profiles, there seems to be an upper bound of 150-200 people in this category. Past that point, the user cannot keep up with their social network.

If a user has a strong civic or brand relationship, those groups may also fall into this category, such as membership in a Ford car club or a softball team.

Groups, and the people in them, give the user’s profile color and uniqueness. Many sites have small add-ons that allow users to give each other gifts or compete in games and quizzes. People find these apps through affinity to a brand, or take part because they see them in another user’s profile. Users may also use these mini-applications to foster social activities or just to break the ice, such as giving a friend a virtual Budweiser or completing a Family Guy quiz that displays the number of points they received.

This is the entire social ecosystem of the site. Depending on what the companies do, users may or may not venture into this space. For example, MySpace focuses on entertainment, and users will often look around to see what their friends are into in order to find new music and shows to watch. By contrast, a site like Mixi is invite-only, focuses on intimate relationships, and does not have a strong public component that allows deep interconnections.

Site Differences

MySpace is focused more on media and entertainment. People use this site to find out what their friends are into, such as music or TV shows. It also has great flexibility, allowing users to personalize their profiles and give them an individual look and feel. This ability makes users focus more on finding new connections than on fostering their more intimate groups.

LinkedIn is seen as a more business-like site that offers networking opportunities to find new business or meet acquaintances in a particular line of business. Its nature fosters more casual relationships based on mutual interests, much like traditional settings such as the Chamber of Commerce or trade associations.

With its rather plain shell, Facebook looks to foster communication between users. Unlike other sites, it is stripped down and simple to navigate, offering a consistent look and feel for its users. The site is organized around small communities, from the year someone graduated from a high school or college to work groups. From there, people branch out to other groups based on their preferences. Its nature allows users to interact with their friends and colleagues easily, and it is set up so people can maintain their offline relationships.

ClickTale Critique

For a midterm in one of my classes at ISU, ITK 367 – Designing the User Interface, I had to do some research on a topic. Late last year I read about a report stating that the fold of the browser (the point where you need to start scrolling to read more content) did not matter. I tracked down this report, and it has some interesting information. However, marketing speak takes over, and they stretch the facts way too much. Sites that picked up this information parroted some of the faulty conclusions, and I wanted to put this out as a counter. Not to say that the study is invalid, but some of its conclusions simply are not substantiated by the data.

Late last year, I read a number of news reports claiming that the fold of a browser window has little effect on users when they visit web sites. For my paper, I did some checking to find the source of the information. I found that a company called ClickTale has a three-part report on the effects the page fold has on users. The first report, “Unfolding the Fold,” details how users interact with web sites. Using 120,000 page views collected over a two-month span, it indicated that 91% of pages viewed had scrollbars, and of the pages that had a scrollbar, 76% of users scrolled to some extent. Of those with scrollbars, about one quarter of users reached the bottom of the page. Even for pages longer than 4,000 pixels, this scrolling behavior held true. This first report gives some indication that users do not see just the top of the page, but rather tend to look through several screens when surfing.

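To make those headline percentages concrete, here is a quick back-of-the-envelope sketch. The percentages come from the report, but the absolute counts are my own arithmetic; ClickTale does not publish the raw breakdown this way:

```python
# Back-of-the-envelope counts implied by ClickTale's headline figures.
# The percentages are from the report; the absolute numbers are the
# arithmetic they imply, not published data.
total_views = 120_000

with_scrollbar = total_views * 0.91     # 91% of pages had a scrollbar
scrolled = with_scrollbar * 0.76        # 76% of those scrolled at all
reached_bottom = with_scrollbar * 0.25  # ~1 in 4 reached the bottom

print(f"views with a scrollbar:   {with_scrollbar:>9,.0f}")   # 109,200
print(f"views with any scrolling: {scrolled:>9,.0f}")         # 82,992
print(f"views reaching bottom:    {reached_bottom:>9,.0f}")   # 27,300
```
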
The next two reports, “Scrolling Research Report V2.0,” parts 1 and 2, delve deeper into how users scroll through pages. Their findings seem to show that users are much more likely to scroll down the page a little than to go all the way to the bottom; participation seems to drop rapidly after the user scrolls more than 500 pixels. However, the analysis shows that the percentage of people scrolling to the bottom of the page remains near a constant 20% no matter how long the page is. The first part of the report also showed an interesting breakdown of where the fold sits. There were three major locations, at 430, 600, and 800 pixels, corresponding to typical screen resolutions; the most common, 600 pixels, equates to a 1024×768 display. The actual fold varies slightly from machine to machine, since users have different setups, including extra toolbars and programs that change how much vertical space the page content gets.

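To see why those fold positions line up with common screen resolutions, here is a minimal sketch of the arithmetic. The report ties the 600-pixel fold to 1024×768; pairing the other two clusters with 800×600 and 1280×1024 is my own assumption, and the implied browser-chrome heights are simply the difference:

```python
# Fold clusters reported by ClickTale, paired with the screen height
# each presumably reflects. Only the 600px <-> 1024x768 pairing is
# stated in the report; the other two pairings are my assumption.
clusters = [
    (430, 600, "800x600"),
    (600, 768, "1024x768"),
    (800, 1024, "1280x1024"),
]

for fold_px, screen_h, resolution in clusters:
    chrome = screen_h - fold_px  # vertical space taken by browser chrome
    print(f"{resolution}: fold at {fold_px}px implies ~{chrome}px of chrome")
# 800x600: fold at 430px implies ~170px of chrome
# 1024x768: fold at 600px implies ~168px of chrome
# 1280x1024: fold at 800px implies ~224px of chrome
```

The extra toolbars mentioned above would simply push each fold a bit lower than these round numbers, which matches the spread ClickTale observed around each cluster.
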
I feel that the data provides valuable insights into how users interact with web pages, even if a number of the conclusions are seriously flawed. The major point I took away is that it provides evidence that designers do not need to put everything at the top of the page. People seem to accept scrolling as a part of using web sites.

Some of the researchers’ analysis seems correct, but problems start showing up in the conclusions drawn from it. In the first part of “Scrolling Research Report V2.0,” one conclusion is that web sites should maximize images while minimizing text because people tend to browse. Nowhere does the data show that people scroll farther down the page when there are more graphics or multimedia objects; we would need a comparison between text-heavy and graphics-heavy pages before making that statement. Another conclusion is that you should break the page layout into sections. Again, the data does not support this. The researchers noted that as people scrolled, viewership dropped exponentially or linearly depending on position. If breaking the page into sections worked, the data might show plateaus where people paused between section breaks.

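To make that plateau test concrete, here is a small sketch of the kind of check the data would need to pass. The viewership curve and the flatness threshold are invented for illustration; they are not numbers from the report:

```python
# Hypothetical scroll-depth curve: percent of viewers still engaged at
# each 100px step down the page. These values are invented to
# illustrate the idea, not taken from the ClickTale data.
viewership = [100, 95, 80, 62, 48, 47, 46, 33, 24, 21, 20]

FLAT = 2  # max percentage-point drop between steps to count as flat

# A sectioned layout that actually held attention might show flat runs
# (plateaus) at section boundaries; smooth exponential decay would not.
plateaus = [
    (i * 100, (i + 1) * 100)
    for i in range(len(viewership) - 1)
    if viewership[i] - viewership[i + 1] <= FLAT
]
print(plateaus)  # [(400, 500), (500, 600), (900, 1000)]
```

If ClickTale’s curves showed runs like these at consistent depths across sectioned pages, the “break it into sections” advice would have support; a smooth drop-off does not provide it.
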
One conclusion I feel they overreach on is that the footer is important. They state that one in five users scrolls to the bottom of the page and spends 10-15 seconds there. Since the bottom is the second most viewed part of a page after the top, one could assume that putting information at the bottom is a good idea. However, people reading a page often scroll ahead of where they are reading, which means the bottom is where they wait for their reading to catch up with their scrolling. People also spend more time at the bottom because many sites put reference links and blog comment boxes there. In addition, clicking the browser back button to leave the page takes time, both to perform the action and to load the previous page, and that time is counted at the bottom. More research needs to happen before this conclusion can be deemed valid.

Despite the major leaps of faith in their assertions, I do agree with their two major points. The first is that the top sections of the page are its most important parts, since nearly 100% of users have that content on the screen. The other is that the page fold plays less of a role than it once did, since at least three-fourths of users scroll down the page for more content. It would not surprise me if the biggest reason for this were the addition of the scroll wheel to mice. Even if these two statements hold, I think the conclusions drawn from the research go too far. Coming from a for-profit corporation, this is not unexpected, since the information in the report came from the same tools they are trying to sell. This also makes the information somewhat suspect, since you do not know whether the researchers did the work scientifically by trying to approximate overall usage across commercial as well as educational and personal sites.