Wednesday, January 30, 2008

Populitism: Socialistic Fantasy


It was a shame we had to “skim” Computer Lib/Dream Machines as it was by far the most “fun” of the readings assigned, but the theory of media I find most intriguing comes from Nelson and his vision of Xanadu, so I guess I’ll get to talk about it a little anyway. Moulthrop talks a lot about Nelson’s notion of populitism: a combination of ‘populism’ and ‘elite’. He says,

“a ‘populite’ culture might mark the first step toward the realization of . . . ‘a game of perfect information’ where all have equal access to the world of data, and where ‘given equal competence (no longer in the acquisition of knowledge, but in its production), what extra performativity depends on in the final analysis is ‘imagination,’ which allows one either to make a new move or change the rules of the game.” (695)

What’s great about this theory is its accuracy in predicting the selling point of a Xanadu-like hypertext web such as the World Wide Web, while simultaneously forgetting that we’re talking about human beings here. There is no possibility for a Xanadu as Nelson envisioned it because power will always be held by a few, and creating the illusion that it’s not just leads to corruption. That’s why none of the communist experiments have worked. They said everything was owned by all the people; it’s just that certain people held those goods in trust for the others. A socialist internet falls along the same lines.

Nelson’s diatribe on the Computer Priesthood (304) seems oddly archaic as more people know and use computers than ever before, but there is still an elevated status imparted to someone who knows the jargon of a computer and not just how to use it. Corporations are very aware of the power held in the web, and aren’t going to offer control of it to everybody. And while it’s great fun to dream of a utopia where our interconnectivity creates a social rule and unity . . . blah blah blah, the truth is that power is held by a few. It always has been, always will be, and no matter how many revolutions we go through, the power is merely exchanged from one priesthood to another, while the populace continues on in blissful delusion.

On another note, despite Nelson being a little deluded in his utopian dreams, I really enjoyed his predictions of future technology. At least he was trying to move beyond theory into practicality.

I can't think of a witty title...

When looking for a particular media theory to write about for this week, I was struck with Jean Baudrillard's opening line of "Requiem for the Media": 

"There is no theory of the media" (278)

Baudrillard is writing in the context of McLuhan and Enzensberger's ideas about media and theory, which makes this statement somewhat puzzling. Baudrillard perhaps offers evidence for this claim by explaining the relationship between the media and ideology. Maybe Baudrillard meant that the theories of others such as McLuhan are flawed; in the process of trying to prove that theory doesn't exist, I think he ultimately creates his own theory (even if it is a sort of "anti-theory").

For example, he revises McLuhan's idea that the medium is the message by explaining that "the essential Medium is the Model. What is mediatized is not what comes off the daily press, out of the tube, or on the radio: it is what is reinterpreted by the sign form, articulated into models, and administered by the code" (283). While I think that Baudrillard successfully adapts McLuhan's and Enzensberger's ideas for his own view, I had trouble understanding some of his ideas and actually wrote "...none of this is really making any sense to me..." in my notes.


I found the last article, "The End of Books" by Robert Coover, to be the most interesting (probably because it was short). When Coover talks about students using computers to write documents as leading to the end of books, I thought about my personal dependence on my laptop. When I was an undergraduate I had to have a hardcopy of everything I read so that I could write on the document. Electronic texts bothered me because I felt distanced from them. As a graduate student (who has to pay for printing) I quickly adapted to reading articles online, and now I prefer it. I like being able to carry only my computer with me and have all my notes for my classes on my jump drive or desktop. I enjoy the quickness of copy-and-paste as opposed to physically organizing hard copies of documents in order to see connections. I think that perhaps Coover meant that through hyperfiction (allowing the reader to get something different out of a text each time they encounter it) paper books will become obsolete, but I took it to mean that people simply won't have any use for physical books (except as a novelty): electronic copies will take their place.

As a side note...
I also enjoyed what Coover said about writing students being ultra-conservative and his description of how some students freaked out at the idea of not having any structure/guidelines for their writing and attempted to create hyper versions of traditional structures. I am fully aware that I am that type of student, as I am further discovering while contemplating my New Media project...

Marx and media?

I found Jean Baudrillard’s article, “Requiem for the Media,” very interesting, especially since this blog is supposed to focus on the theory of media and this article begins by stating “There is no theory of the media” (278). Baudrillard’s elaboration on how media is a capitalist enterprise (ah, Marx) with an imbalance of power and an irresponsibility on the side of the receiver was both frightening and true. His description of mass media (TV and radio) as “non-communication” was something that I had never considered before. In reality, our society has become more based on a media power hierarchy, and those who control or influence the media are definitely the “haves” while those of us left listening are the “have-nots.” There is no communication left; we are told something and expected to believe (or not believe) the information, and the “have-nots” have no way of entering the conversation. In proper communication there should be a sharing of knowledge, a bouncing around of ideas, not a limited and controlled release of information. But the question arises, “How do we create an equal playing field in new media?” I guess it could be argued that computer technologies and the internet have begun to equalize things, but my blog doesn’t have the same impact as a presidential candidate’s or movie star’s. Even if my blog is better written and more intelligent than Ms. Spears’s rants, hers are still quoted in various other forms of media and mine are left strictly to the pleasure of classmates and friends. Even though I have the means of entering the conversation now, I lack the position to do so.

What does my rant mean? Well, Baudrillard felt that media needed to be interactive, “a press edited, distributed, and worked by its own readers” (286), in order to “unfreeze” our current “blocked” situation. But isn’t his description basically what blogs are today? And unless the Today show starts quoting my blog the way it quotes the crazed celebrities, I don’t think that anything has been unblocked. We just have millions of “underground” conversations that lack the power to compete with the information put forth by the “haves.” How many people today believe what a celebrity tells them strictly because they are famous? An example: Pam Anderson is PETA’s spokesperson, but what does she know about animals or hunting or farming or any of PETA’s issues? Probably nothing, or at most she knows what PETA’s bigwig (who, by the way, has diabetes and needs to use insulin; gee, I wonder where they get that from) told her, but she has media power, so people listen and believe her.

Baudrillard’s article is very much a warning: just as in the past it was the people who owned the machines who had the power, now it is the people who control the media. They have the power to control what we know, how we think, and what we have access to. It almost seems as though media is teaching us not to think, because it will think for us. I think there are sci-fi movies that begin this way.

Enter the Medium

I would be interested to know how much McLuhan influenced Burgess’s A Clockwork Orange (1962). The Ludovico treatment depicted within the book and the Kubrick adaptation (1971) seems to have been directly influenced by McLuhan—not to mention Alex’s horrorshow, devotchka-like manipulation of Nadsat.

The Ludovico treatment:

Returning to McLuhan, several of his theories intrigued me (and were echoed in later readings). McLuhan seems to want to assign morals to inanimate objects, which struck me as odd (and disagreeable): inanimate objects are by nature amoral, completely without moral leanings, because they are incapable of deciding how they are used—people have to use them for good or evil.

This seems to be the main problem with cyborg theory in general: it wants to assign moral judgment to inanimate objects. The Terminator movies were fun, but we lack the ability to ever create a computer that is self-aware. We could make a computer that might pass the Turing test, which would mean a computer had been programmed to emulate human moral judgment, but the machine would be nothing more than a reflection of the moral underpinnings of its programmer. Perhaps the closest we will ever come to creating machines that have moral judgment is in the creation of a cybernetic organism—merging a human being with a machine.

Cypoultry:

Again, however, the machine itself is still an amoral object, but it is directly controlled by its human brain—essentially taking the place of the programmer. Thus the medium is still not the message; rather, it is a very direct carrier of the message.

Dr. Ludovico with Cypoultry:

Tuesday, January 29, 2008

Media Theory

I found Jay David Bolter’s theories in “Seeing and Writing” the most intriguing because of the way he integrates the connections between old and new media that we have been discussing in class and reading about in other selections with the idea that new media must be something truly unique. In the introduction to “Seeing and Writing,” Nick Montfort wrote that “an understanding of new media can only come when truly novel elements can be divided from those which are imitative, using scrutiny of the sort Bolter applies” (679).

At first, I was confused reading Bolter because Montfort’s introduction led me to believe that new media was something truly unique, while most of Bolter’s examples dealt with strong similarities between new media and the media that came before, like evolutions in printing and typography. In some cases, technological advances even allowed people to go back and revive once-rejected formats in a form of technological nostalgia, as Morris did (681). So, what is new about new media? Surely it is not solely that it allows nonprofessionals to use professional tools without developing professional skills, creating the appearance of a literacy crisis of sorts.

Of course, it was not. Bolter’s point is not that new media must be completely new. After all, there must be some link to connect the new and the old so users can adapt and relate to it. As Montfort explained in his introduction, “one might approach new media from adjoining, better-understood territory” (679).

Bolter’s point is that in all its familiarity, new media is completely unique in some way, whether it is enlarging a window size, scrolling through a window, or forging links between online documents (684). These small but often revolutionary changes alter the way documents are read, what people expect out of their documents, and even the way people think and learn. All of this seems fairly obvious after reading Bolter’s article, but what I think is really interesting is that it is so easy to see the similarities between the new and the old that it is hard to articulate what is really unique about new media, other than that it is accessed by a computer rather than in a book or other print source.

Wednesday, January 23, 2008

New Media History Lessons...maybe?

I should be hesitant to start with a cliché, but I’ve never had much in the way of shame. History is written by the winners. It’s clear that the dominant media will always reflect the values and impressions of the winning group. That said, this idea is now cliché because it’s essentially become part of our cultural consciousness. We now know that we all know that the winners determine the “facts” of history, which leaves us no room for any plausible deniability; I think the popularity of books like Lies My Teacher Told Me and the Disinformation series simply emphasizes this shared anxiety.

I’ll make my point now (and as briefly as possible). This shift in public ideas about the creation of history is not based in some ethical reasoning, the fear that we’re marginalizing in history some horribly oppressed and slaughtered peoples (at least not exclusively). It’s because we’ve become gluttons for information, particularly about subjects we’re interested in and in which we style ourselves as experts, or those subjects that our shared culture agrees we should be familiar with. We live with the anxiety that some day we will say something we’ve taken as fact our whole lives and have someone else tell us we’re wrong and that they can prove it. The ease with which we can now access information means that most people now have the ability to become “self-styled” experts. Electronic media as a method of recording history has allowed people to become participants in this cultural exchange. Consider that for the average information finder, Wikipedia is the first stop for random history needs, and consider how much users are allowed to interact with that information. In our current media we’re no longer satisfied with one perspective, and we often seek out that conflict of ideas as somehow being more “real,” or at least more in line with our experiences.

History and the Media

Traditionally, history has been recorded by the winners, or at least society’s majority. Technology is changing that. While textbooks and scholarly documents may still promote the official view of what happened, the media is allowing individuals to share their perspectives through blogs, YouTube, webpages, etc. This allows everyone to record history, not just scholars. It also allows the public to access many different types of history, ranging from the world and national to the city and personal, in ways that were not widely available to the general public until new media gained popularity. People are able not only to record history but also to analyze and contribute to historical events that have already been recorded, much as Burroughs suggests doing with text in order to see different themes and meanings in existing work. These changes make history feel much more alive to the average person, almost like the happenings Kaprow described, where audiences are no longer confined to their seats and polite applause but interact with the performers and help create the performance. Another consequence is that history recorded in the media reflects a much greater portion of life than the staunchly researched official version does. This will be immensely helpful to future generations as they learn about the past, unless Paula’s fears come true and some horrible catastrophe destroys our digital world or the devices that allow us to access it. With any luck, we can rely on the fact that media and history act reciprocally. The media records history, but history also helps create the media. Wiener mentioned that after WWII, he decided a new type of scientist was required, one engaged with the consequences of scientific work. In his attempt to be the type of scientist he desired, he began working toward cybernetics and helped create the new media we know today, which has allowed history to become more interactive than ever before. History and the media are connected. Neither can progress the way we have come to expect without the other.

History a "fitness" test?

From the beginning of human existence, man has found a way, through technology, to leave his mark on this planet. From the earliest cave paintings to the scrolls that the books of the Bible were originally written on, one thing has remained constant -- man's view that knowledge should be shared and passed on. The mediums of the past have been things that will survive for an extended period of time (unless something horrible and usually man-made happens, i.e. the fire of Alexandria). By passing on information in this way, men have left clues not just in what they wrote or drew on the medium but through the medium itself, creating and adding to the context of the records. The types of dyes or paints used to create the cave paintings in Africa left clues to what plants and animals were available to make those colors from; the areas where the paintings were found are clues to early human migration (and therefore animal migration) patterns; and in areas that are now arid (I am thinking of the White Lady cave painting in Namibia specifically), they provide clues to the evolution of the landscape. Throughout history, similar questions can be answered through the examination of the media from various time frames. But what about today's media? What if (BIG WHAT IF COMING!) all the crazies who say some catastrophe is coming that will destroy much of what we know are right? Most of today's media is digital, and I don't think it will survive a meteor hitting the planet. Information on CD, DVD, VHS (remember those?), and stored on our hard drives needs a human to access it with another piece of technology. Unlike the cave paintings, scrolls, books, and other media of the past, which only needed the human eye and possibly some understanding of the language and/or alphabet, today's media needs a device that can interpret the code that we translated our information into. Years of knowledge could be lost if the understanding of a few pieces of technology disappears. Scary.
So what got me thinking about this? Well, you can thank Allan Kaprow and his article "'Happenings' in the New York Scene." After reading this, I started thinking that there is probably no way to accurately record a "Happening"; even if you film it, there is no way to capture every aspect of the happening since it has so many variables and "artists" involved. If you can't record a "Happening" and study it (here comes the scientist in me, brace yourself), is it of any value? The knowledge is there for one brief moment in time and cannot be recreated or built upon, so is it important? OK, now here really comes the scientist... If we apply the theory of natural selection (or, as some of you may know it, "survival of the fittest") to English, then we must look at a text or work and evaluate its "fitness." Fitness in science refers to how many surviving offspring an organism has that also reproduce and pass on genes, and so on. So if an organism's genes survive for 10,000 generations, it was a very "fit" organism. Well, let's get back to English. Plato, Marx, and Aristotle have very "fit" texts and theories; we have been learning from them for generations. Surviving cave paintings and scrolls are also "fit" because we are continuing to learn from them. But Happenings? In 50 years, will we still be learning from them, or will they disappear from our knowledge? If we can't record them and continue to learn from them, I would say they have a fairly low "fitness" and therefore are doomed, possibly along with several other forms of media that we presently use.
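The biology analogy can even be sketched as a toy simulation. The titles and survival probabilities below are invented purely for illustration, with "fitness" modeled as the chance that a work keeps being read and copied from one generation to the next:

```python
import random

def surviving_media(works, generations=50, seed=0):
    """Toy natural selection for media: each generation, a work
    'reproduces' (keeps being copied, read, and taught) with its
    own probability; otherwise it drops out of the record."""
    rng = random.Random(seed)
    alive = dict(works)  # title -> per-generation survival probability
    for _ in range(generations):
        alive = {title: p for title, p in alive.items()
                 if rng.random() < p}
    return sorted(alive)

# Hypothetical values, chosen only to illustrate the argument above.
works = {
    "Plato's dialogues": 0.999,  # canonical, endlessly recopied
    "cave paintings": 0.995,     # durable medium
    "a Happening": 0.50,         # unrecorded one-off event
    "a VHS tape": 0.90,          # needs a working player
}
print(surviving_media(works))
```

Run for enough generations, anything that depends on a fragile medium or an unrepeatable moment almost surely drops out of the record, which is exactly the worry about Happenings.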
*end rant*

History and Media

The types of media that are used to record things from the past cannot help but add to the document and help create some of its context. It is not always possible to know the specific context behind something from the past, and by utilizing a certain medium for conservation, attributes of that medium become part of the history it is capturing. For example, ancient texts that were written on papyrus and uncovered by scientists hundreds of years later, even if they do not reference the media directly, say something about the types of media available at the time of their production.
I think that new media lends itself exceptionally well to this idea of the relationship between history and media, because by its nature new media breathes new life into old art. By utilizing modern methods of communication and expression, historical artifacts (writings, art work, ideas, etc) can not only be available for modern consumption, but also by employing current media, historical objects are brought back into current conversations. New media adds some relevance to whatever it is recording because it is useful today as a medium. 

Doublethink

'Who controls the past,' ran the Party slogan, 'controls the future: who controls the present controls the past.' And yet the past, though of its nature alterable, never had been altered. Whatever was true now was true from everlasting to everlasting. It was quite simple. All that was needed was an unending series of victories over your own memory. 'Reality control', they called it: in Newspeak, 'doublethink'. (George Orwell 1984)

History is not just a record of the past; it is a means by which a society and individuals in that society define themselves. In an oral society, people define themselves through mnemonic devices that transfer the overarching ideas of a metanarrative, and not so much the details or the specifics of that truth. Whoever controlled the present could simply alter the details of the narrative. When writing became the predominant medium, the metanarrative became more substantive. Facts were recorded, and the records were filed and archived. In order to control history, leaders of the present had to either completely eradicate this material and make new documents or edit and omit existing documents from the historical records: think Stalin, Hitler, Mao. It was all a very lengthy and recognizable process that usually accompanied the discrediting and/or killing of the old guardians of this information, but they were never able to destroy it all and the metanarrative continued below the grid.

With postmodernism and New Media, we’ve taken all the work out of changing history and have stopped caring about metanarratives of truth and facts of the past altogether. The overload of information available on the internet has allowed everyone to create their own histories, an activity promoted in academic institutions. To further complicate this issue, new media is impermanent, allowing the “unending series of victories over your own memory” Orwell feared. It may seem that this hyper-availability of information is a freeing experience allowing the individual to control their own destiny, make their own reality, interpret events by their own experiences, etc., but I would argue an opposing viewpoint. By neglecting experiential reality for interpretative reality imposed by virtual information, the individual is at the mercy of a temporal existence more fleeting than life itself, both unaffected and ineffectual in the grand scheme of things. Meanwhile, the metanarrative continues on without them, and they are ruled by the people who do think about the big picture and their place in it.

Consider Kaprow’s article “’Happenings’ in the New York Scene.” The artists are living a temporal existence, no longer seeking to pass along hard-learned truths to present and future societies. Instead, they create one-time events that attempt to convey an emotion or a response; it doesn’t matter what that response is, as long as it’s a response: something to make the viewers feel some connection to the humanity and the world around them. All the while they perpetuate the problem by not creating something sustainable that the viewer can experience in any way outside the interpretation of their own memory, because they can’t go back and view or experience it again. When we go to a gallery and view a painting, watch an old movie, or read some piece of the canon, it’s not just an appreciation of our interpretation that we experience; it’s a connection to the past, to the millennia-old metanarrative of mankind: a connection we’re giving up in the name of self-worship while simultaneously becoming increasingly more depressed and self-destructive.

Break out the Soma! Er . . . I mean Prozac!

Brandon’s view of Wiener:

As I was reading today’s assignments, Wiener’s article in particular caught my attention (okay, Burroughs’ cut-up theory has interested me for a long time, too). I was interested to learn of Wiener’s theory of ethics in cybernetic development and that this theory spawned from his work in the iron triangle as a reaction to the development of the atomic bomb. Until I had a good look at Wiener, I was under the impression that the only people who seemed to be really interrogating cybernetics were Hollywood scriptwriters (a scary state of affairs). In addition to learning that ethics were considered from an early point in the history of computer-human interaction, I was shortly thereafter floored by the possibilities of what Wiener describes in his chapter. On the one hand, we have a culture that ignores the ethics of AI development, which inevitably ends with machine gods that devour humans, as in the films Metropolis or The Matrix. On the other hand, we preprogram computers with ethical responses (like in managing waste valves, as Wiener describes), in which case we have a much happier outcome. Think, for example, of the most immoral, godless institution ever devised by humans—insurance companies—and add a computer that can elicit predetermined moral responses. Currently, an insurance computer determines the path of least cost/greatest profit and bases its decision accordingly, but imagine if the computer were actually trained to value human life in the sense that it would choose a costlier course of action to ensure saving lives. The computer would follow its predetermined moral path methodically and without any deviation, and it would function with a complete disregard for profiteering, something insurance employees have proven incapable of.
Immediately, the question arises, “Who gets to preprogram the computer’s morals?” I would like to suggest that as long as the programming is done publicly, its use enforced by law, and absolutely no lobbyists are allowed within 500 feet of the programmers, the customer service qualities of our insurance companies would improve.

Cut-up/Redux: Wiener’s view of Brandon (note, must be read while techno music is playing):

On the one hand, we have a culture that ignores the ethics of AI development, which inevitably ends with machine gods that devour humans, as in the films Metropolis or The Matrix. Immediately, the question arises, “Who gets to preprogram the computer’s morals?” Until I had a good look at Wiener, I was under the impression that the only people who seemed to be really interrogating cybernetics were Hollywood scriptwriters (a scary state of affairs). Think, for example, of the most immoral, godless institution ever devised by humans—insurance companies—and add a computer that can elicit predetermined moral responses. The computer would follow its predetermined moral path methodically and without any deviation, and it would function with a complete disregard for profiteering, something insurance employees have proven incapable of. I was interested to learn of Wiener’s theory of ethics in cybernetic development and that this theory spawned from his work in the iron triangle as a reaction to the development of the atomic bomb. Currently, an insurance computer determines the path of least cost/greatest profit and bases its decision accordingly, but imagine if the computer were actually trained to value human life in the sense that it would choose a costlier course of action to ensure saving lives. As I was reading today’s assignments, Wiener’s article in particular caught my attention (okay, Burroughs’ cut-up theory has interested me for a long time, too). I would like to suggest that as long as the programming is done publicly, its use enforced by law, and absolutely no lobbyists are allowed within 500 feet of the programmers, the customer service qualities of our insurance companies would improve. In addition to learning that ethics were considered from an early point in the history of computer-human interaction, I was shortly thereafter floored by the possibilities of what Wiener describes in his chapter.
And on the other hand, we preprogram computers with ethical responses (like in managing waste valves as Wiener describes) in which case we have a much happier outcome.
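The cut-up above was done by hand, but Burroughs' technique is easy to mechanize. A minimal sketch (the naive sentence splitting and the sample text are my own, purely illustrative):

```python
import random

def cut_up(text, seed=None):
    """Burroughs-style cut-up: slice a text into sentences
    and rearrange them at random."""
    rng = random.Random(seed)
    # Naive split on periods -- good enough for a sketch.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    rng.shuffle(sentences)
    return ". ".join(sentences) + "."

source = ("Wiener's article caught my attention. "
          "Machine gods devour humans. "
          "We preprogram computers with ethical responses. "
          "No lobbyists within 500 feet.")
print(cut_up(source, seed=7))
```

Every run (or seed) yields a different juxtaposition of the same sentences, which is the whole point of the method: new meanings surface from rearrangement, not from new words.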

Wednesday, January 16, 2008

"Gods of our own machines"

I believe that in 50 years we will rely on computers even more than we do in our present day. I also think that computers will be even more compatible with our thought processes. As Vannevar Bush explains in "As We May Think", the human mind operates by association. He goes on to propose that some day machines will function the same way. I think that this idea has not only come to pass but can be projected even further into the future. Eventually we will be able to successfully create the labyrinth that Jorge Luis Borges references and, as Janet Murray stated, become "gods of our own machines" (11).

As time goes by, technology becomes more and more integrated into our daily processes; therefore it is natural to think that the way we interact with technology will become even more natural. As Murray explains, new media grows like potato roots: expanding inward and outward at the same time, without a beginning or an end. I think that as new media continue to grow, they will become even more indispensable to us.

Another of Bush's ideas that I believe is not only relevant to our technology today but will continue to be pertinent in 50 years is the function of the "memex". The introduction to Bush's piece explains how "the memex has been envisioned as a means of turning an information explosion into a knowledge explosion," going on to state that "this remains one of the defining dreams of new media" (35). Just as the advent of writing allowed us to free our minds of having to remember so much and enabled us to expand our thinking to more complex issues, I believe that within the next 50 years computers will continue to relieve that burden.

As We Will Transmit

As our culture feels the full repercussions of algorithmic thought, a crisis of identity must intercede. The next fifty years will see a turn away from the joyous celebration of the network, the tipping point, and the crowdsource. Instead, the ubiquity of data and thought will call for an inertial twin of solitude and forgetfulness. The symphony of the networked self will demand a recognition of pauses that give distinction to melodies and individuality to notes. Rather than the walnut-sized cameras that graced the headbands of Vannevar Bush's future information worker, the thinker of tomorrow will discipline her or himself with the blankness of solitude. The places and moments that go unrecorded, untagged, and unlinked will define and outline the layeredness of the flickering histories that defined the early 21st century.

Concepts like nation, people, and being will gain distinctness through a recognition of embodied experience. Cogito ergo sum will be replaced by the unshared dance of experience. The abstraction (pulling apart) of the eye will be replaced by the concreteness of skin. As we pull away from authorized interfaces, the perverseness of primary contact will regain a currency that transcends the teletouch. The prophylactic and analgesic purpose of networks will find its dialectic in the revelation of the embodied.

Down the Rabbit Hole

As exciting as new media is, we’re really just in the infancy of what it’s capable of. When Gutenberg invented the printing press in 1439, no one would have suspected how the mass production of reading materials, the lowering of production costs, and the growth of the literate population would affect the world. Sure, there were some aristocrats who correctly feared that a literate population would jeopardize their positions of power, but no one could have foreseen how drastically that new technology would change the way we think, or how rapidly a literate proletarian population would improve the quality of life throughout the world while simultaneously dooming future generations to unimaginable atrocities: a deteriorating environment, weapons of mass destruction, attention deficit disorder, etc.

So, in regard to the question, “how will we think in 50 years?” I would have to say there’s no way to truthfully gauge the after-effects of new media, beneficial and harmful alike. However, since this is an assignment, I must pose some answer, so I’m going to look at some emerging technologies and use my imagination. I’m going to dive down the rabbit hole here and say: I think we’ll think exactly the way we’re told to think.

Scientists around the globe are conducting experiments to determine how compatible the human mind and computers are. Right now, the human-computer interface involves physical interaction with a keyboard and mouse (and now, with the advent of touch interfaces, hand gestures), but computers are being developed to use a new type of interface: thoughts. This technology is called brain-computer interface (BCI) or sometimes direct neural interface (DNI); it’s a lot like Blu-ray and HD DVD, in that we’ll have to see which name wins out. At any rate, the interface can take electrical signals from the brain and interpret them in the computer, and it can also send electrical signals to the brain and have direct effects on the subject. One neat application is that quadriplegics can lead more independent lives:
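To give a rough sense of the loop a BCI runs, here is a minimal, hypothetical sketch: real systems extract features from EEG recordings and use trained classifiers, whereas the threshold, the signal values, and the command names below are all invented for illustration.

```python
# Hypothetical sketch of a brain-computer interface's core loop:
# map a stream of (simulated) normalized brain-signal power readings
# to discrete commands. Real BCIs use trained classifiers over EEG
# features; the numbers and threshold here are made up.

def classify(power_reading, threshold=0.6):
    """Map a normalized motor-signal power reading to a command."""
    return "move_cursor" if power_reading >= threshold else "rest"

# A simulated stream of normalized signal-power values.
readings = [0.2, 0.35, 0.7, 0.9, 0.4]
commands = [classify(r) for r in readings]
print(commands)  # → ['rest', 'rest', 'move_cursor', 'move_cursor', 'rest']
```

The point of the sketch is just the shape of the pipeline: continuous signal in, discrete intention out, which is what lets a thought stand in for a mouse click.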


But who knows where this technology will lead? We’re a long way from sticking ourselves into virtual worlds and being used as batteries for evil machines, but 50 years is a long time. Am I paranoid? Maybe, maybe not . . .


Convergence Point: Moving Towards Augmented Reality?

How will we think in 50 years? Much as Bush thought in 1945, the process of organizing and accessing data (which has already reached speeds he could not have anticipated) will only continue to increase in speed and efficiency, and in 50 years we will have grown to expect this from all types of technology. Additionally, the networks that tie both time and space together will continue to occupy us more and more. Our thinking processes will revolve around this interconnectedness and speed.

As Jennie mentioned before, we will see disciplines as being even more interconnected and more intimately tied to computer technologies and programming. I also believe that this interconnectedness will spill over into the technology itself as well as our social patterns. The physical aspects of technology will become increasingly standardized so as to make both physical connections and wireless networking far easier. These physical changes will only reinforce the themes of connectedness that will permeate our thought processes.

Like Yu Tsun of Borges’s short story, what we initially thought of as chaos and randomness will be challenged by our own intellect. We will not be able to accept things that don’t make sense, and as a result our ability to rationalize and forge connections and patterns will increase. We already feel this urge (religion and science both combat the anxiety we feel in the presence of randomness), but the reality of the world in 50 years and our ability to access and organize information will only increase the mental dexterity with which we draw connections.

It’s this ability to forge connections, and the speed with which we access information, that, combined with current trends in social networking sites, could potentially form a “global village” in the truest sense of the word, and I can’t help but think that some of the more “tribal” attitudes that McLuhan worried about might very well become the reality.

I can only say that what precedes this statement is a guess, and a fairly poor one at that, based on the reality of technology as I understand it now. My hopes for the future (one perhaps a bit more than 50 years out) run more along the lines of ubiquitous computing and augmented reality. I would have to say that it’s my hope that how we will think in the future (again, perhaps a bit more distant) hinges on our ability to integrate the digital and the physical worlds until both simply become the natural course of reality as we know it.

As the world connects

When I first began thinking about this assignment I really didn't think that thinking would change all that much. After all, this technological rat race has to slow down at some point, right? But, as all good grad students do, I thought more about the topic and also considered my mother's stories about being in college. She lived in the library, and research was a time-consuming and tedious process; even applying to college was a process and a half! Writing papers on a typewriter! I would never have finished a single essay with all the spelling and grammar mistakes I usually make. Computers and the Internet have made information so readily available, and I can't imagine people 50, 30, or even 10 years ago fully understanding the extent to which knowledge would be so easily transferred. Not that research is not a time-consuming process in 2008, but with the added help of the Internet, sources and topics can be identified much faster, and peer review can happen with a simple e-mail attachment and a click of the mouse. Not to mention that in some classes paper is a thing of the past! Especially at the graduate level, some professors only require an electronic copy of an assignment. Humans proved long ago that the only limit to what we can make is the limit of our imaginations, and with each new advancement in technology our imaginations are provided with more fuel.

In the next 50 years I think we will look back at cable and DSL Internet the way many of us consider dial-up now, and the Internet of the future will be far more interactive. Instant messaging will no longer be dominated by text; video, or better yet 3D transmissions of ourselves, will be used to communicate through some sort of portable device (similar to the iPhone, only much more versatile). Cameras, phones, computers, PDAs, video games, and every other tech device will be combined into one handheld digital device that will have service everywhere! Even in the middle-of-nowhere parts of SD that are apparently invisible to all cell phone providers now (otherwise known as McLaughlin, my hometown). People will be even more consumed with convenience and having everything at their fingertips. An “office” will take on a different meaning: rather than a building, it will probably be an online space where people from all over the world hold meetings with co-workers they have never physically seen. College students will no longer be limited to taking classes at one university, because, just like the online offices, online classrooms will connect universities from different cities, states, and countries. The opportunity for students to study under the top researchers in their field will no longer depend on whether they can get accepted into the proper university, and fields will become much more united.

Computers will no longer be limited by how fast a person can talk or type; programs will be developed to read thoughts, and the software will be able to take down information at the speed of thought. Brainstorming will forever take on a whole new meaning. As all of this takes place, the present chaos that the technological world already exists in will only increase. As people learn to do less for themselves, and as the fundamental skills needed for writing, reading, and learning become matters of convenience, there will be a breakdown in society. You can call me an “anti-technologist,” but I believe that as technology advances we will see a corresponding breakdown in the functioning ability of our society. Skills that were considered necessities 50 years ago are no longer as important (handwriting, cooking, cleaning, knowing how to find a library book, how to grow a garden, etc.). Technology has created a society completely dependent on technology, and I believe that it will be our downfall, because I think in 50 years the way people will think will be, “If it’s not convenient, why do/learn it?”

If electric flying monkeys could dream:

In 50 years, I think that people will think in much the same way that they think today, and thought 50 years before today. I suspect this may be a naughty thing to say in this class, but I’m not sure that new media changes the way we think so much as it becomes a closer reflection of how we already think. New mediaists (new mediaites?), posthumanists, and cyborg theorists hypothesize that technology advances and we, taking the next step in evolution, attempt to mimic (even deify) the machines that are capable of executing such complex algorithms with inhuman speed. When Al Gore invented the internet, it infested our minds to the point that people shortened their attention spans and came to rely on hypertext-based thinking to reflect (.com-envy) one of the seven wonders of the modern world. I realize that I’m arguing over whether the chicken or the egg came first, but I think that people have always had nearly incoherent ideas that jump instantly from subject to subject. Refer to an old class favorite, Plato’s Phaedrus, for an excellent example of hypertext thought. Socrates and Phaedrus begin by discussing love, then move to rhetoric, the soul, divine inspiration, and even art. The dialogue, and Socrates’ thought processes, hardly plays out in the simple linear fashion the “new” media of the day would suggest. Hypertext today certainly does seem to mimic our thought processes better than a standard written book might, but that hardly means that it is changing the way we think. In fifty years there will be computers with hypertext links like today’s, but they will jump from link to link without the prompting of the reader/viewer. Instead, these AIs will “think” for themselves by determining what utterly useless Google fact they want to jump to next. They will move from link to link in an unsolvable labyrinth with seemingly endless futures to choose from.
Will this really be a completely new way of thinking, or will computer scientists merely have managed to mimic the average high school student’s daily internet surfing habits? I’d love to continue but I’ve run out of blogging space. See you all in class…

Tuesday, January 15, 2008

How Will We Think in 50 Years

It is difficult to know anything for sure, but after reading the essays assigned for today and thinking about how thinking has already changed due to technology, I suspect society’s thinking will change in two major ways. I think society will see academic disciplines and careers as more interrelated in the future than it often sees them today, and I think individuals’ thought processes will slowly undergo noticeable changes.

Lev Manovich mentions in his introduction “New Media from Borges to HTML” that new media has helped create and strengthen relationships between disciplines like computer programming and art. Randy Pausch and those honoring him made similar connections in “Dying 47-Year-Old Professor Gives Exuberant ‘Last Lecture.’” I foresee computer programming becoming linked to all disciplines as each begins to become more involved in technological developments for teaching or other purposes. I also believe that these connections will not only exist between computer programming and other disciplines but between all disciplines to some degree, as new media will make it easier for teachers to implement theoretical approaches like teaching or writing across the curriculum. I have noticed a tremendous difference in the teaching materials available to me over the last two or three years as sites like YouTube and others have gained popularity and become readily available. When I was student teaching, I often wanted to make connections to other disciplines like history, the sciences, etc., but I often lacked the resources to do so effectively. Today, all sorts of resources are only a click away. As teachers and professionals become more aware of these programs and use them more often, future students and workers will no doubt begin to see the world less as a place divided by diverse interests and fields and more as a place connected by collaboration and interdisciplinarity.

Individuals’ thought processes will change because of these connections between fields or disciplines but also because of the way they will process information. Jorge Luis Borges explores ideas of time and how alternate endings are possible if different variables are taken into consideration in his story “The Garden of Forking Paths.” The ideas Borges mentions have huge consequences for composition theory and its abstract concept of truth or truths. What is true if the story is always changing depending on the day, time, reader, etc.?

On a more concrete level, technology has already changed the way scholars conduct research. While technology has made information more accessible and research easier, it has also made it more complicated. I remember one of my college teachers telling my classmates and me about a study she had read a few years ago on the difference between traditional and technological research methods. The traditional researchers tended to follow a much more linear path, whereas the technological researchers often found themselves exploring multiple paths at once, becoming overwhelmed with information and much more indecisive in terms of topic and focus. As in “The Garden of Forking Paths,” individuals will need to change the way they think in order to understand the potential of new media as well as how to overcome the challenges accompanying it.

Wednesday, January 9, 2008

Second!

First Post

First!