Wednesday, November 17, 2010

Objects and the Internet



Object-oriented programming has many benefits, the primary of which is object reuse.[i] Object reuse is a large part of why application languages such as Java and C/C++ dominate the web and platform development industry: it lets data structures and forms evolve in a way that reduces the overall time required to build an application and increases its reliability, while compartmentalizing and simplifying the development process. This in turn reduces development costs, allowing businesses to achieve quantifiable results faster.
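
A minimal sketch of what such reuse looks like in JavaScript (the Product object and its fields are invented for illustration): one prototype definition is shared by every instance, rather than duplicated per object.

    // A reusable object definition: one prototype shared by all instances.
    function Product(name, price) {
      this.name = name;
      this.price = price;
    }

    // Methods on the prototype are written once and reused by every
    // Product created, instead of being rewritten for each object.
    Product.prototype.describe = function () {
      return this.name + " costs $" + this.price.toFixed(2);
    };

    var book = new Product("JavaScript Guide", 29.99);
    var disc = new Product("Backup DVD", 4.5);
    console.log(book.describe()); // "JavaScript Guide costs $29.99"
    console.log(disc.describe()); // "Backup DVD costs $4.50"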

The Document Object Model (DOM)[ii] is a language-neutral interface that allows JavaScript, XHTML and DHTML to function uniformly regardless of platform or hosting infrastructure; they are expected to play well together anywhere.
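
For example, the same few DOM calls behave identically in any compliant browser (the element ids here are assumptions, not taken from any particular page):

    // Standard DOM calls: the same interface regardless of browser or platform.
    var heading = document.getElementById("title");     // look up a node by id
    heading.textContent = "Updated via the DOM";        // change its text

    var item = document.createElement("li");            // build a new node
    item.appendChild(document.createTextNode("added")); // give it content
    document.getElementById("list").appendChild(item);  // attach it to the tree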

A great illustration of the possibilities of JavaScript objects comes from companies such as Amazon, which was awarded a famous patent for "1-click" purchases.[iii] The 1-click process relies heavily on OOP-based ideas such as data structures built with JavaScript and server-side objects. The 1-click method is also a highly competitive asset, currently licensed by Amazon to Apple for use in iTunes.

The possibilities for object use within XHTML, XML and CSS, with the DOM and DHTML as standards, are endless; AJAX automates form filling, object reuse and, most importantly, browser object and content manipulation.[iv] The user experience improves and greater efficiencies are realized by reducing the number of keystrokes required to make an online purchase.
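
A minimal sketch of the AJAX pattern (the /cart endpoint and the cart-count element are hypothetical): data moves in the background and only the affected part of the page is redrawn, so a purchase step needs no full reload.

    // Send the item asynchronously and update the page in place,
    // so the user never waits on a full page reload.
    function addToCart(itemId) {
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "/cart", true); // hypothetical endpoint
      xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          // Only the cart counter changes; the rest of the page is untouched.
          document.getElementById("cart-count").textContent = xhr.responseText;
        }
      };
      xhr.send("item=" + encodeURIComponent(itemId));
    }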

Newer frameworks such as Ruby on Rails use AJAX to reduce the time required to deliver a web application from a few weeks to a few minutes.[v] The best-known demonstration of the approach is Google Maps, a web application built on AJAX and object-oriented concepts implemented in the browser, accessing data in the cloud.

The possibilities for objects on the Internet are endless. They will always be used in the manner in which they were designed, and they will replicate and resemble their grandparents from OOP-based languages such as C/C++, Java and others. The difference is that they will facilitate the decentralization of processing requirements and the adoption of remote computing resources via the ever-present web browser, acting as the agent of communications for distributed applications.


[i] Vachharajani, Manish; Vachharajani, Neil; August, David I.; (Princeton University, 2003) A Comparison of Reuse in Object-oriented Programming and Structural Modeling Systems [Online] PDF Document, Available from: liberty.princeton.edu/Publications/tech03-01_oop.pdf (Accessed on November 18th 2010)
[ii] N.A. (W3C, January 19th 2005) The Document Object Model [Online] World Wide Web, Available from: http://www.w3.org/DOM/ (Accessed on November 18th 2010)
[iii] Hutcheon, Steven; (Sydney Morning Herald, May 23rd 2006) Kiwi Actor v Amazon.com [Online] World Wide Web, Available from: http://www.smh.com.au/news/technology/kiwi-actor-v-amazoncom/2006/05/23/1148150224714.html (Accessed on November 18th 2010)
[iv] Garrett, Jesse James; (Adaptive Path, February 18th 2005) Ajax: A New Approach to Web Applications [Online] World Wide Web, Available from: http://www.adaptivepath.com/ideas/essays/archives/000385.php (Accessed on November 18th 2010)
[v] Hibbs, Curt; (O'Reilly, ONLamp.com, June 9th 2005) Ajax on Rails [Online] World Wide Web, Available from: http://onlamp.com/pub/a/onlamp/2005/06/09/rails_ajax.html (Accessed on November 18th 2010)

Tuesday, November 16, 2010

The new eCommerce paradigm


Traditional workflows have been used by operations management and business for decades. Standard business workflows now include e-mail, web queries, e-commerce and transactions, and these exist alongside traditional business methods such as process and operations management.[i]
Web-based businesses use the Internet to conduct business; as such, the only available method to collect, verify and maintain customer relationships is transaction-based e-mail and web forms, used to generate both direct sales and sales leads. These functions form the basis of the ERP, CRM and e-commerce industries.

Data collection is the starting point for e-commerce businesses. Once data and information have been gathered, their use must comply with local laws and regulations, which means maintaining corporate policies that meet the regulations of every country of operation. The ISO series of standards (ISO 22307:2008 and ISO/IEC 27002) provide frameworks for such policies.[ii]


The web form initiates the collection and authentication of information from clients and, in some cases, suppliers and merchants. Amazon uses nothing but web forms to conduct its entire business, with net sales of around 7.5 billion dollars in the third quarter of 2010 alone.[iii]
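
As a sketch of that first collection step (the field name and validation rule are assumptions, not any particular retailer's actual process), a web form can verify its input in the browser before anything reaches the server:

    // Validate the collected data in the browser before it is submitted;
    // the server must still re-check it, since client checks can be bypassed.
    function validateSignup(form) {
      var email = form.elements["email"].value; // hypothetical field name
      if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
        alert("Please enter a valid e-mail address.");
        return false; // block submission until corrected
      }
      return true; // allow the POST to the server to proceed
    }
    // Wired up as: <form onsubmit="return validateSignup(this)"> (assumed markup)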


With web forms, businesses collect, manage and maintain all client- and merchant-related workflows; in doing so they have created a new paradigm for business, where information gathered at negligible cost may be used to generate profit through traditional business on an exponential scale.
The new paradigm for business in the 21st century is based in the cloud and uses the Internet as its engine.




References


[i] Cai, Ting; Gloor, Peter; Nog, Saurab; (Dartmouth College, May 14th 1996) DartFlow: A Workflow Management System on the Web Using Transportable Agents [Online] PDF Document, Available from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.57.3354&rep=rep1&type=pdf (Accessed November 13th 2010)
[ii] N.A. (Wikipedia) ISO/IEC 27000 Series [Online] World Wide Web, Available from: http://en.wikipedia.org/wiki/ISO/IEC_27000-series (Accessed November 13th 2010)
[iii] N.A. (Amazon Inc., October 21st 2010) Amazon Announces Third Quarter Earnings [Online] World Wide Web, Available from: http://phx.corporate-ir.net/phoenix.zhtml?c=97664&p=irol-newsArticle&ID=1485834&highlight= (Accessed November 13th 2010)

Friday, November 5, 2010

Predictions 1.2


The impact of the increased availability of all human knowledge on the Internet is only beginning to be felt by modern society. Ray Kurzweil refers to the advance and paradigm shift of the Internet containing all human knowledge as the "Singularity": a point in the near future where machines will have greater intelligence than their human masters.[i]

H.G. Wells's oracle from The Time Machine can be seen directly in Wikipedia. Combined with Web 2.0 practices, where machines can begin to understand the content and data presented via the HTML 4.01 and HTML 5 standards, this gives anyone with an Internet connection access to the "sum of all knowledge". The result is the "Law of Accelerating Returns",[ii] under which even technologies not previously related to the Internet see rapid returns and breakthroughs in innovation.

Brief examples of rapid technological evolution include computers and computational power; but fields such as engineering are also seeing exponential returns from technologies like direct metal laser sintering (DMLS), which leverage the availability of computational power and information.[iii] This foundational technological revolution will impact both industrial and non-industrial product development and manufacturing.

The most apparent shift is now occurring in print media, where online access to all print media has spawned the e-book reader. Companies such as Apple have capitalized with products like the iPad, and magazines and newspapers are no longer physical but digital. I recently saw William Gibson lecture on how the publishing industry must adapt to a new paradigm of point-of-use printing, where a book purchase has the vendor produce the hard or soft copy on site with modern printing machines[iv], not only out of necessity but also as a means of reducing the global carbon footprint. He also remarked that modern society presents science-fiction authors with far too many variables to draw the arcs used as storytelling tools. For those of you unfamiliar with his work, he coined the term "cyberspace".

The rate of technological change will increase, and with it the rate of differentiation, complexity and innovation in all facets of human life. These rates are driven by Moore's Law, the Law of Accelerating Returns and the availability of all human knowledge on the Internet, and they will allow traditional fields of science to evolve exponentially. Medicine may finally conquer ageing and its related diseases through projects such as the Methuselah Foundation[v] and the Immortality Institute;[vi] culture will become more granular and fragmented to meet the individual desire for unique consumption. Even now, within academia, the traditional humanities are being supplanted by modern neuro-psychiatry.

These shifts are occurring because the Internet facilitates communication among otherwise separate groups of individuals. Since humans are far better at problem solving in groups, when a group of sufficient size has access to all known information on a given subject, the rate of change in that subject becomes proportional to the size of the group.

The other change is that privacy will become nearly nonexistent given the use of search-engine data and social networks; the current impacts are being resolved in various courts in cases concerning the public disclosure of personal information.[vii]

The impact of these changes on me and my children will be that we will live longer, healthier lives with fewer resources and a smaller, more conscious ecological and technological footprint. Our work will differ in that we will specialize in fields considered non-traditional; we will consume our media in a self-directed fashion, choosing only the media we are interested in; and we will have less privacy than our grandparents.

References

[i] Kurzweil, R.; Viking Adult; The Singularity is Near (September 22, 2005) ISBN: 0670033847
[ii] Kurzweil, R.; Viking Adult; The Singularity is Near (September 22, 2005) ISBN: 0670033847
[iii] N.A.; 3T RPD Ltd. Direct Metal Laser Sintering [Online] Video (October 26th 2009) Available from: http://www.youtube.com/watch?v=BZLGLzyMKn4&feature=related (Accessed November 5th 2010)
[iv] Gibson, William; Young, Nora; CBC, Full Interview: William Gibson on Zero History [Online] MP3 File, Available from: http://www.cbc.ca/spark/2010/10/full-interview-william-gibson-on-zero-history/ (Accessed on November 5th 2010)
[v] N.A. Methuselah Foundation, About the Methuselah Foundation [Online] World Wide Web, Available from: http://www.mfoundation.org/index.php?pagename=mj_about_mission (Accessed on November 5th 2010)
[vi] N.A. Immortality Institute, About the Immortality Institute [Online] World Wide Web, Available from: http://www.imminst.org/about (Accessed on November 5th 2010)
[vii] Wright, Marie A.; Kakalik, John S.; ACM, The Erosion of Privacy [Online] PDF Document, Available from: http://portal.acm.org/citation.cfm?id=270913.270922 (Accessed on November 5th 2010)

Thursday, November 4, 2010

Programming the Internet 1.1


The deep web and its impact on search engines, academia and commercial sites can be summarized by the presence of new commercial products aimed directly at the information-management sector.

The issue with the deep web is that the various commercial bodies that contribute to the Internet at large also maintain large repositories of competitive knowledge, or repositories with no external links or connections; these may be characterized as collections of "trade secrets" and "information". The primary example is McDonald's recipe for fries: you will find the nutritional makeup of the fries on McDonald's web site, but you would be hard pressed to find the details of their fabrication anywhere. These commercial bodies engage in public or semi-public communication with their commercial partners via the Internet; but just as not all radio communication is for public consumption[i], neither are all web servers.

Michael K. Bergman wrote:

“Traditional search engines create their indices by spidering or crawling surface Web pages. To be discovered, the page must be static and linked to other pages. Traditional search engines can not "see" or retrieve content in the deep Web — those pages do not exist until they are created dynamically as the result of a specific search. Because traditional search engine crawlers can not probe beneath the surface, the deep Web has heretofore been hidden.”[ii]

The only available method to search the deep web, as stated by Bergman, is to conduct direct searches of non-linked sites using cross-referencing technology such as that cited in Cyveillance's studies.[iii] Realistically, the only method that would not rely on educated guesses would be to use a network-scanning utility such as Nmap, Nessus or Metasploit to crawl the IPv4 address space for every value from 0.0.0.0 to 255.255.255.255 and index the findings against the various engines, or to leverage existing heat maps from CAIDA to establish the publicly routable space as the primary scope[iv] for indexing. The major problem is that this approach faces legal barriers, since in many countries port scanning constitutes a crime, and building a stateful web crawler for the entire address space poses a real technical and fiduciary challenge.
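
For illustration only, and bearing the legal caveat above in mind, here is a sketch in Node.js of the single probe such a crawl would repeat billions of times: attempt a TCP connection on port 80 and record whether a web server answers. The addresses shown are reserved documentation addresses, not real targets, and the timeout is an arbitrary choice.

    // Node.js sketch: probe one address for a listening web server.
    // Scanning addresses you do not control may be illegal; see the caveat above.
    var net = require("net");

    function probe(host, port, done) {
      var sock = net.connect({ host: host, port: port });
      sock.setTimeout(2000);                  // give up after 2 seconds
      sock.on("connect", function () {        // something answered: a
        sock.destroy();                       // candidate for indexing
        done(host, true);
      });
      sock.on("timeout", function () { sock.destroy(); done(host, false); });
      sock.on("error", function () { done(host, false); });
    }

    // Placeholder hosts; a full crawl would enumerate the routable IPv4 space.
    ["192.0.2.1", "192.0.2.2"].forEach(function (ip) {
      probe(ip, 80, function (host, open) {
        console.log(host + (open ? " has a listener on :80" : " did not answer"));
      });
    });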

Another issue with search engines, as stated in Bergman's paper, is quality. Although there is a significant quantity of deep-web sites, most of them topic databases, search engines are more concerned with the quality and accuracy of results than with their quantity.[v]

Academia is concerned primarily with quality, accuracy of information and relevance of topic over quantity; accordingly, various search-engine providers now offer services that cater to the volume of knowledge contained within the traditional houses of excellence.[vi] These include publishers such as Prentice Hall, Springer, Deitel, IEEE, ACM, and other academic organizations.

Commercial entities desire competitive intelligence in addition to labour and resources. Competitive intelligence is based primarily on what is known about one's competition, and next to unintentional disclosure, the volume of information available online via both traditional search engines and the deep web, such as customs import and landing databases, would allow a corporate entity to determine characteristics of its competition that would otherwise remain unknown. Businesses already exist to mine this information, and competitive intelligence is an emerging market in which various companies offer services of this nature.[vii]

The effect of the deep web on future search engines will be a more granular content focus and deeper content analysis. As the deep web grows and contains more information of value, search engines will have to develop non-index-based databases built on tertiary page characteristics such as metadata, captured via intelligent techniques such as machine learning.[viii] This future, although currently dark, will be illuminated by the businesses that stand to gain the most from data mining, business intelligence and information capture.
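
A toy sketch of that capture step (the record layout is invented for illustration): pulling the descriptive metadata a crawler could index beyond the visible page body.

    // Collect the page's descriptive metadata, the kind of tertiary
    // signal a deep-web indexer could store alongside the URL.
    function captureMetadata(doc) {
      var record = { title: doc.title, meta: {} };
      var tags = doc.getElementsByTagName("meta");
      for (var i = 0; i < tags.length; i++) {
        var name = tags[i].getAttribute("name");
        if (name) {
          record.meta[name] = tags[i].getAttribute("content");
        }
      }
      return record; // e.g. { title: "...", meta: { description: "...", keywords: "..." } }
    }

    console.log(captureMetadata(document));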


References



[i] Sokol, Brett; Miami New Times, Espionage Is in the Air [Online] World Wide Web, Available from: http://www.miaminewtimes.com/2001-02-08/news/espionage-is-in-the-air/ (Accessed on November 4th 2010)
[ii] Bergman, Michael K.; White Paper: The Deep Web: Surfacing Hidden Value [Online] World Wide Web, Available from: http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 (Accessed on November 4th 2010)
[iii] Murray, Brian H.; Moore, Alvin; Cyveillance, Sizing the Internet: A White Paper [Online] PDF Document, Available from: http://www.cs.toronto.edu/~leehyun/papers/Sizing_the_Internet.pdf (Accessed on November 4th 2010)
[iv] N.A.; The Cooperative Association for Internet Data Analysis (CAIDA); Measuring the Use of the IPv4 Space with Heat Maps [Online] World Wide Web, Available from: http://www.caida.org/research/traffic-analysis/arin-heatmaps/ (Accessed on November 4th 2010)
[v] Bergman, Michael K.; White Paper: The Deep Web: Surfacing Hidden Value [Online] World Wide Web, Available from: http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 (Accessed on November 4th 2010)
[vi] N.A. Google Inc. Google Scholar Search Engine [Online] World Wide Web, Available from: http://scholar.google.com (Accessed on November 4th 2010)
[vii] N.A. ImportGenius, About Import Genius [Online] World Wide Web, Available from: http://www.importgenius.com/about.html (Accessed on November 4th 2010)
[viii] Mitchell, Tom M.; CMU, July 2006, The Discipline of Machine Learning [Online] PDF Document, Available from: http://www.cs.cmu.edu/~tom/pubs/MachineLearning.pdf (Accessed on November 4th 2010)