johnstifter@ hotmail DoT COM
Mankind's destiny is not planetary; it is cosmic.

I was an ancient navigator.
My mission was to collect data from the universe
and pass it on into the vibrations of existence
stars, planets, moons, meteors, where it would be embedded
for the millions of years it would take
until its presence reached an entity worthy enough
to be susceptible to its slight, but powerful influence.
What I did, I did out of reason, reason for understanding.
What I learned, I learned in order to be a pollinator of evolution.

My mission was endless.
I carried it out from within a spherical ship that soared through the distances of space.
This home, and shell, enslaved me to life; just as my mission was endless, so was my life.
I had been in the ship for so long that I could not recall even the most minuscule of memories preceding its launch or beginning.
There was nothing I remembered except what I saw and felt.

- SharkChild, The Concomitant, 2012

John Stifter is thinking about machines that hallucinate what you are visualizing and share it back with you, in a reinforcing feedback loop.

"Our minds are finite, and yet even in these circumstances of finitude we are surrounded by possibilities that are infinite,
and the purpose of human life is to grasp as much as we can out of the infinitude."
- Alfred North Whitehead


The famous TRANSHUMAN God and Adam.

Peter Thiel

Prometheus Paradise

Transhuman Orion


Crackboy - Hilinner (Gesaffelstein Remix)

Grimes - Oblivion // Conrad Zeus // Sinoia Caves - Evil Ball // Ruddyp - Wanderers // connectome // Sammy Hagar - Heavy Metal // kreayshawn // GameMaster // Noomi Rapace

"The telescoping of science and its unification leads to cultural and political tensions
that create the great danger of our time.
Technological progress intensifies the struggle for influence in the final state, the final state of unification." - John Stifter

Peter Weyland
Weyland ZBrush Central Art

Billions of years ago, and billions of light years away,
the material at the center of a galaxy collapsed towards
a super-massive black hole, and then intense magnetic fields
directed some of the energy of that gravitational collapse,
and some of the matter, back out in the form of tremendous jets
which illuminated lobes with the brilliance of a trillion suns.

Metallica // Never Look Back (DJ Tiesto Mix)
Zbrush sculpture of Orion

Zbrush sculpture Ridley Scott Nick Bostrom

// Hennes & Cold - The Second Trip (Alex Kid Remix) // Nukleuz vs Tracid Traxxx //
Dark Verse book advert made with ZBrush & After Effects.

----- Video Demo Reels -----
Turing completeness
universal compiler
machine compiler
Artist Lustmord - Purifying Fire. Music for the mind. Prometheus
Sketch-based 3D modeling interface tool performing an extrude.
3D modeling tool
Matter's composition is greater than the sum of its parts. Are you a transhuman? To become the knower is a steep hill. Form and function.
An event more profound than the detonation of the first atomic bomb has happened: the creation of artificial life. Synthia is the nickname for a synthetic bacterium created in 2010 by a team at the J. Craig Venter Institute. The team synthesized a modified version of the Mycoplasma mycoides genome and implanted it into a DNA-free bacterial shell of Mycoplasma capricolum. The resulting organism, dubbed "Synthia", was shown to be self-replicating.

"The empires of the future are the empires of the mind." - Sir Winston Churchill, speech at Harvard University, September 6, 1943. Know game theory or bust.
"I love all who are like heavy drops falling one by one out of the dark cloud that lowers over man: they herald the coming of the lightning, and perish as heralds. Lo, I am a herald of the lightning, and a heavy drop out of the cloud: the lightning, however, is the Overman!" - Friedrich Nietzsche, Thus Spoke Zarathustra
Website of concept artist John Stifter: 3D and 2D digital visual effects artist, futurist, filmmaker, and writer. I have done it... I have found a way to turn 2D drawings into 3D form. Mark this day: March 22, 2010. The world has changed! Because I have done it all with trig! What our hands make, what our dreams show us...
"In his house at R'lyeh dead Cthulhu waits dreaming."
Game theory John von Neumann Megadeth Addicted to Chaos
Transhuman Orion
Quantum Cats. JF-2035's portfolio. It is very near. Epipolar geometry and a Wacom pad = art and science. Sketch Based Modeling MEL script beta coming in October. Stay tuned...
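Epipolar geometry, named above as the science half of the sketch-to-3D work, comes down to one constraint: image points x1 and x2 are a true correspondence only if x2ᵀ F x1 = 0, where F is the fundamental matrix. A minimal NumPy sketch of that check (a hypothetical toy setup, not the MEL tool itself): two identity-intrinsics cameras related by a pure sideways translation, for which F reduces to the cross-product matrix of the baseline.

```python
import numpy as np

def skew(v):
    # Cross-product matrix [v]_x, so that skew(v) @ u == np.cross(v, u).
    x, y, z = v
    return np.array([[0.0,  -z,   y],
                     [  z, 0.0,  -x],
                     [ -y,   x, 0.0]])

# Two identity-intrinsics cameras separated by a pure translation t:
# the fundamental matrix is then F = [t]_x.
t = np.array([1.0, 0.0, 0.0])
F = skew(t)

def epipolar_residual(F, x1, x2):
    # x2^T F x1 -- zero (up to noise) for a true correspondence.
    h1 = np.append(x1, 1.0)   # homogeneous image coordinates
    h2 = np.append(x2, 1.0)
    return float(h2 @ F @ h1)

# The 3D point X = (2, 3, 4) projects to (0.5, 0.75) in camera 1
# and (0.75, 0.75) in camera 2; the residual vanishes for this pair.
```

The same residual is what a sketch-based tool can minimize to decide which stroke points in two views belong to the same 3D curve.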

"So through endless twilights I dreamed and waited, though I knew not what I waited for. Then in the shadowy solitude my longing for light grew so frantic that I could rest no more, and I lifted entreating hands to the single black ruined tower that reached above the forest into the unknown outer sky. And at last I resolved to scale that tower, fall though I might; since it were better to glimpse the sky and perish, than to live without ever beholding day. " H. P. Lovecraft - The Outsider, 1921 Pluripotency
The Dark Verse
Bar-Sinister might be a hub of American underground art and emergent music. If you're looking for the best industrial and indefinable new music, go to Bar-Sinister. If you listened to Rant Radio back in the day, you must go there. The best club I have found so far in LA. ;)
Free Agency without end.
What I'm capable of. It is called "FreeForm modeling". It has been distributed to the common man. Knowledge is free. Noosphere // Thoughts on Gravity // Skew-Hermitian matrix // Cayley transform // Arthur Cayley // Semi-major axis // Scale factor // Scaling // Scalar // Z-transform // Laplace transform // Fourier transform // FOR QUESTIONS EMAIL ME : //
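Of the keywords above, the skew-Hermitian matrix and Cayley transform pair is a concrete, checkable fact: Arthur Cayley's transform U = (I - A)(I + A)⁻¹ turns any skew-Hermitian A into a unitary matrix, and I + A is always invertible because the eigenvalues of a skew-Hermitian matrix are purely imaginary. A small NumPy sketch, purely illustrative:

```python
import numpy as np

def cayley(A):
    """Cayley transform: maps a skew-Hermitian A (A^H == -A)
    to the unitary matrix U = (I - A) @ inv(I + A)."""
    n = A.shape[0]
    I = np.eye(n, dtype=complex)
    return (I - A) @ np.linalg.inv(I + A)

# Build a random skew-Hermitian matrix: A = B - B^H satisfies A^H == -A.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B - B.conj().T
U = cayley(A)
# U is unitary: U @ U^H == I, up to floating-point error.
```

Because (I - A) and (I + A) commute, U Uᴴ multiplies out to the identity exactly; the numeric check below only absorbs rounding error.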
/////////////////////////////// Exported from Notepad++
global string $CrvN_And_Vector[],$CurveNames[]; global int $MCurveIndex[],$CRVNameIndex[],$CRVIndexBoundry[]; int $iC,$k,$i,$SizeA,$SizeB,$FindAdjEdgesA[],$FindAdjEdgesB[],$i,$LocalAllAdjEdges[],$LocalAllAdjEdgesB[],$UnionMatch[],$CollectFound[]; string $locslCRVsNames[]; $iC=0; for($iC=0; $iC<`size($CurveNames)`-1; $iC++){ $locslCRVsNames = GetArrayNames($CurveNames,{$iC}); //select -r $locslCRVsNames; PAUSEn(8); $FindAdjEdgesA = AdjEdges($CrvN_And_Vector,$MCurveIndex,$iC,{1,0}); $FindAdjEdgesB = AdjEdges($CrvN_And_Vector,$MCurveIndex,$iC,{0,1}); if((`size($FindAdjEdgesA)`!=0)&&(`size($FindAdjEdgesB)`!=0)){ //print (" size A == 0" + "\n"); $UnionMatch=intArrayExactUnionInt($FindAdjEdgesA,$FindAdjEdgesB); $locslCRVsNames = GetArrayNames($CurveNames,$UnionMatch); select -r $locslCRVsNames; PAUSEn(5); if(`size($UnionMatch)`==0){ //print (" size B == 0" + "\n"); $UnionMatch=intArrayExactUnionInt(VertexsFromEdges($MCurveIndex,$FindAdjEdgesA),VertexsFromEdges($MCurveIndex,$FindAdjEdgesB)); $locslCRVsNames = GetArrayNames($CurveNames,$UnionMatch); select -r $locslCRVsNames; PAUSEn(5); if(`size($UnionMatch)`==0){ clear $LocalAllAdjEdges; clear $LocalAllAdjEdgesB; clear $CollectFound; $SizeA=`size($FindAdjEdgesA)`; $SizeB=`size($FindAdjEdgesB)`; for ( $i = 0; $i <$SizeA; $i++){ for ( $k = 0; $k <$SizeB; $k++){ $LocalAllAdjEdges=AdjEdgesOfEdges($FindAdjEdgesA[$i],$iC); $LocalAllAdjEdgesB=AdjEdgesOfEdges($FindAdjEdgesB[$k],$iC); $UnionMatch=intArrayExactUnionInt($LocalAllAdjEdges,$LocalAllAdjEdgesB); if(`size($UnionMatch)`>0){ $CollectFound[`size($CollectFound)`]=$UnionMatch[0]; $CollectFound[`size($CollectFound)`]=$FindAdjEdgesA[$i]; $CollectFound[`size($CollectFound)`]=$FindAdjEdgesB[$k]; // print (" BOUNDRY i "+$i+" k "+$k + "\n"); $locslCRVsNames = GetArrayNames($CurveNames,{$UnionMatch[0],$FindAdjEdgesA[$i],$iC,$FindAdjEdgesB[$k]}); //select -r $locslCRVsNames; PAUSEn(5); boundary -ch 1 -or 0 -ep 0 -rn 0 -po 0 -ept 0.01 $locslCRVsNames; PAUSEn(1); } clear 
$LocalAllAdjEdges; clear $LocalAllAdjEdgesB; } } $CollectFound[`size($CollectFound)`]=$iC; $locslCRVsNames = GetArrayNames($CurveNames,$CollectFound); select -r $locslCRVsNames; PAUSEn(5); clear $CollectFound; }else{ print (" !!! size B != 0 !!! " + "\n");} }else{ print (" !!! size A != 0 !!! " + "\n");} } } //print $CollectFound; ////END proc int [] EdgesFromVertex(string $xVec_Two_CRV[],int $vecNumber[]){ int $intCurveList[],$i; for($i=0; $i<`size($vecNumber)`; $i++){ $intCurveList = AppendAllIntArrays($intCurveList, stringToIntArrayX($xVec_Two_CRV[$vecNumber[$i]], ",")); } return $intCurveList; } proc int [] VertexsFromEdges(int $xVecInt[],int $vecNumber[]){ int $i,$Pair[],$LocalPair[]; for($i=0; $i<`size($vecNumber)`; $i++){ $Pair =IndexPairFunc($vecNumber[$i]); $LocalPair=AppendAllIntArrays($LocalPair,{$xVecInt[$Pair[0]],$xVecInt[$Pair[1]]}); } return $LocalPair; } proc int [] AdjEdges(string $xVec_Two_CRV[],int $xVecInt[], int $vecNumber,int $StartEnd[]){ int $LocalPairs[],$diffList[],$ADJ_Edges[]; int $Edges[]; $LocalPairs= VertexsFromEdges($xVecInt,{$vecNumber}); //print (" $LocalPairs "+"\n"); //print $LocalPairs; if($StartEnd[0]+$StartEnd[1]==2){ $diffList= EdgesFromVertex($xVec_Two_CRV,$LocalPairs); }else{ if($StartEnd[0]==1){ $diffList= EdgesFromVertex($xVec_Two_CRV,{$LocalPairs[0]}); }else{ $diffList= EdgesFromVertex($xVec_Two_CRV,{$LocalPairs[1]});} } //print (" $diffList "+"\n"); print $diffList; $ADJ_Edges=intArrayRemoveExact({$vecNumber},$diffList); //print (" $ADJ_Edges "+"\n"); print $ADJ_Edges; return $ADJ_Edges; } proc int [] AdjEdgesOfEdges(int $FindAdj, int $main){ global string $CrvN_And_Vector[]; global int $MCurveIndex[]; int $LocAllAdj[]; $LocAllAdj = AppendAllIntArrays($LocAllAdj, AdjEdges($CrvN_And_Vector,$MCurveIndex,$FindAdj,{1,1})); $LocAllAdj=intArrayRemoveExact({$main},$LocAllAdj); return $LocAllAdj; } proc int [] AppendAllIntArrays(int $A[] , int $B[]){ // print " AppendAllIntArrays " ; print "line 226 "; print "\n" ; int $AB[] 
=$A; for($eachF in $B){ $AB[`size($AB)`]= $eachF; } return $AB; } proc int [] AdjVertex(string $xVec_Two_CRV[],int $xVecInt[], int $vecNumber){ // int $vecNumber=5; int $i; int $iC=0; int $Pair[],$LocalPair[],$diffList[]; int $intCurveList[]; $intCurveList = stringToIntArrayX($xVec_Two_CRV[$vecNumber], ","); int $ADJ_vecS[]; for ( $i = 0; $i < (`size($intCurveList)`); $i++) { $Pair =IndexPairFunc($intCurveList[$i]); $LocalPair={$xVecInt[$Pair[0]],$xVecInt[$Pair[1]]}; $diffList=intArrayRemoveExact({$vecNumber},$LocalPair); $ADJ_vecS[`size($ADJ_vecS)`]=$diffList[0]; } return $ADJ_vecS; } /////////// proc string [] GetArrayNames(string $names[], int $list[]){ string $array[]; for ( $i = 0; $i < (`size($list)`); $i++) { $array[$i]=$names[$list[$i]]; } return $array; }
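The MEL above treats each curve as an edge (a pair of vertex indices, as packed into $MCurveIndex) and repeatedly asks which edges share an endpoint before lofting a `boundary` patch. The same bookkeeping reads more plainly as a Python sketch (hypothetical helper names; a vertex-to-edges map stands in for the string-encoded $CrvN_And_Vector lookup):

```python
from collections import defaultdict

def build_vertex_to_edges(edges):
    """Map vertex index -> list of indices of edges touching it.
    Each edge is a (vertex_a, vertex_b) pair, as in the MEL pairs."""
    v2e = defaultdict(list)
    for ei, (a, b) in enumerate(edges):
        v2e[a].append(ei)
        v2e[b].append(ei)
    return v2e

def adjacent_edges(edges, v2e, ei):
    """Indices of edges sharing an endpoint with edge ei (excluding ei),
    the role AdjEdges plays above."""
    a, b = edges[ei]
    return sorted(set(v2e[a] + v2e[b]) - {ei})

# Four edges forming the quad 0-1-2-3: each edge has two neighbours,
# and a closed loop of mutually adjacent edges is a boundary candidate.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
v2e = build_vertex_to_edges(edges)
```

Finding a loop of three or four mutually adjacent curves is exactly the condition under which the script calls `boundary` to skin a patch.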
proc string[] ArrayFromAllinString(string $list){
    string $ItemB[],$ItemA[],$listA;
    int $i,$Indexi,$t;
    $i = $Indexi = $t = 0;
    $listA = $list;
    while ( $t < 1 ) {
        $i++;
        $ItemA = {`substring $listA $i $i`};
        if ( size($ItemA[0]) == 0){ $t = 2; }
        else { appendStringArray($ItemB, $ItemA, 1); }
        $Indexi++;
    }
    return $ItemB;
}
/*
string $FindArray[]={"h","e","l","l","o","j","o","h","n"};
// For a limit when generating random numbers, you may need to adjust this depending on your problem.
// evolve(50,4,9, 25.9); print $genomes;
// evolve(int $generations, int $elites, int $tournamentSize, float $mutationRate)
evaluateP($genomes);
//### Find = "hellojohn"
int $ALL_index[] ={ 7, 7, 15, 21, 1, 12, 10, 26, 14};
int $ALL_index[] ={7, 4, 6, 26, 21, 2, 19, 19, 5};
*/
int $reset=0;
RunCA( $addOnstate , $OnorOff , $addOnstateOLD , $reset);
//clear $GridArray;
$GridArray=PrintNiceGrid(sqrt(`size($addOnstate)`),sqrt(`size($addOnstate)`), $addOnstate);
print $GridArray;
float $OffS ,$OnS;
$OnS=1.0; $OffS=0.05;
for ($i=0; $i <`size($Locaters)`; $i++){
    if($OnorOff[$i]==1){
        setAttr ($Locaters[$i]+".scale") $OnS $OnS $OnS;
        SetGridItemColorCAString($Locaters[$i], $addOnstate[$i]+$iClo );
    }else{
        SetGridItemColorCAString($Locaters[$i], -1 );
        setAttr ($Locaters[$i]+".scale") $OffS $OffS $OffS;
    }
}
$Survivals ={0,3,4,5};
$Births = {2,6};
$iClo=7;
int $ABC[];
$ABC=CreateIntIndex(9);
int $Set[];
$Set={0,1,0,1,0,1,0,1,1};
CreateSet($Set);
proc int [] CreateSet(int $Set[]){
    int $Value[];
    int $ABC[];
    $ABC={0,1,2,3,4,5,6,7,8};
    int $i,$k;
    $k=0;
    for ($i=0; $i <9; $i++){
        if($Set[$i]==1){ $Value[$k]=$ABC[$i]; $k++; }
    }
    return $Value;
}
global string $LETTERS[];
string $ABCs = "_abcdefghijklmnopqrstuvwxyz_";
////////print $LETTERS
$LETTERS = ArrayFromAllinString($ABCs);
/*
string $SetSx;
$SetSx="010101011";
$LETTERS = ArrayFromAllinString($SetSx);
*/
string $Find = "hello_john_nice_night_a_very_nice_night_indeed"; string $FindArray[]=ArrayFromAllinString($Find); //{"h","e","l","l","o","j","o","h","n"};// global string $gFind[]; $gFind=$FindArray; int $SizeF = `sizeBytes $Find `; // int $SizeF = `sizeBytes $SetSx `; int $SizeABCs = $SizeF; int $SizeABCs = `sizeBytes $ABCs `; global int $gGenomeLength; global int $gPopulationSize; global int $gMaxGeneValue; global float $fitness[]; $gGenomeLength = $SizeF; global int $grid; $grid=$gPopulationSize = $SizeF; $gMaxGeneValue =$SizeF; $gMaxGeneValue =$SizeABCs-1; global float $genomes[]; $gMaxGeneValue=$SizeABCs; $genomes = `initializePopulation`; print $genomes; /* //evolve(int $generations, int $elites, int $tournamentSize, float $mutationRate) evolve(22,1,5,1); $GridArray=PrintNiceGrid(sqrt(`size($OnorOff)`),sqrt(`size($OnorOff)`), $OnorOff); print $GridArray; print $Survivals; for ($i=0; $i <`size($Locaters)`; $i++){ if(($addOnstate[$i]>0)){ setAttr ($Locaters[$i]+".scale") $OnS $OnS $OnS; SetGridItemColorCAString($Locaters[$i], $addOnstate[$i]+$iClo ); }else{ SetGridItemColorCAString($Locaters[$i], -1 ); setAttr ( $Locaters[$i]+".scale") $OffS $OffS $OffS; } } print $fitness; // zellojjohnonice_fight hahhahaha! 
print $genomes; print $gMaxGeneValue */ proc float [] evaluateP(float $genomes[]){ global int $grid; global int $Survivals[]; global float $fitness[]; global int $gPopulationSize; global int $gGenomeLength; global string $gFind[]; float $Percent_total,$Percent_foundF,$F_total; string $GENOME_LETTERS=""; global string $LETTERS[]; int $j,$i,$iii; $j=1; $F_total= $gGenomeLength; $iii=0; string $LettersPrint=""; float $fitnessx[]; $fitnessx=$fitness; $Percent_foundF=0; int $Set[],$RuleX[]; int $reset=1; for($j=0; $j<$gPopulationSize; $j++){ $Percent_foundF=0; //eval for($i=0; $i<$F_total; $i++){ $Set[$i]=$GENOME_LETTERS= $LETTERS[int(abs($genomes[Grid_NxNxN_GetIndex($j,$i,$grid)]))]; $LettersPrint+=$GENOME_LETTERS; } $RuleX=CreateSet($Set); $Survivals=$RuleX; $Percent_foundF=RunCA($reset); if(($Percent_foundF<80)&&($Percent_foundF>20)){$fitnessx[$iii] += 1;}else{$fitnessx[$iii] += 0;} print $LettersPrint; print("\n"); $LettersPrint=""; $iii++; $Percent_foundF=0; } $fitness=$fitnessx; return $fitnessx; } /////// proc float [] evaluateP(float $genomes[]){ global int $grid; global float $fitness[]; global int $gPopulationSize; global int $gGenomeLength; global string $gFind[]; float $Percent_total,$Percent_foundF,$F_total; string $GENOME_LETTERS=""; global string $LETTERS[]; int $j,$i,$iii; $j=1; $F_total= $gGenomeLength; $iii=0; string $LettersPrint=""; float $fitnessx[]; $fitnessx=$fitness; $Percent_foundF=0; for($j=0; $j<$gPopulationSize; $j++){ $Percent_foundF=0; //eval for($i=0; $i<$F_total; $i++){ $GENOME_LETTERS= $LETTERS[int(abs($genomes[Grid_NxNxN_GetIndex($j,$i,$grid)]))]; $LettersPrint+=$GENOME_LETTERS; if($GENOME_LETTERS==$gFind[$i]){ $Percent_foundF+= 1;} if($Percent_foundF>0){$fitnessx[$iii] = $Percent_total= $Percent_foundF/$F_total; }else{$fitnessx[$iii] +=$Percent_total =0; } } if($Percent_foundF<=0){$fitnessx[$iii] += -1;} print $LettersPrint; print("\n"); $LettersPrint=""; $iii++; $Percent_foundF=0; } $fitness=$fitnessx; return $fitnessx; } proc float 
[] initializePopulation(){ global int $grid; global int $gPopulationSize; global int $gGenomeLength; global int $gMaxGeneValue; global float $genomes[]; int $i, $j; //set gene for($i=0; $i<$gPopulationSize; $i++){ for($j=0; $j<$gGenomeLength; $j++) $genomes[Grid_NxNxN_GetIndex($i,$j,$grid)] = int(rand ($gMaxGeneValue)); } return $genomes; } //fitness function proc evolve(int $generations, int $elites, int $tournamentSize, float $mutationRate){ global int $grid; global float $genomes[]; global string $LETTERS[]; global int $gPopulationSize; global int $gGenomeLength; //matrix $genomes[10][9] global float $fitness[]; float $Tempfitness[]; int $i; for($i=0; $i<$generations; $i++){ $fitness = evaluateP($genomes); $Tempfitness=$fitness; $genomes = breed($genomes,$Tempfitness, $elites, $tournamentSize, $mutationRate); } int $iii=0; int $indexX[]; for($j=0; $j<$gPopulationSize; $j++){ for($i=0; $i<$gGenomeLength; $i++){ $indexX[$iii]=int(abs($genomes[Grid_NxNxN_GetIndex($j,$i,$grid)])); $iii++; } } } //population next generation proc float [] breed(float $genomes[], float $xitnessX[], int $elites, int $tournamentSize, float $mutationRate){ global int $grid; global int $gPopulationSize; global int $gGenomeLength; global float $fitness[]; $fitness=$xitnessX; float $newGenomes[]; // next gen int $lowest = 0; // new generation for($i=0; $i<$elites; $i++){ $lowest = findLowestFitness($lowest); for($j=0; $j<$gGenomeLength; $j++) $newGenomes[Grid_NxNxN_GetIndex($i,$j,$grid)] = $genomes[Grid_NxNxN_GetIndex($lowest,$j,$grid)]; } //selection for($i=$elites; $i<$gPopulationSize; $i++){ int $father = tournamentSelect($tournamentSize); int $mother = tournamentSelect($tournamentSize); $newGenomes = crossover($genomes, $newGenomes, $father, $mother, $i); } return mutate($newGenomes, $elites, $mutationRate); } //minimum proc int findLowestFitness(int $lowest){ global float $fitness[]; int $i, $index; float $indexFitness = 100000; //Very high number float $lowestFitness = 
$fitness[$lowest]; int $All_index[]; $All_index=SortNumbersIntIndex($fitness); $index=$All_index[0]; $index=$All_index[`size($All_index)`-1]; return $index; } proc int tournamentSelect(int $tournamentSize) { global int $gPopulationSize; global float $fitness[]; int $i; int $parent = (int)rand ($gPopulationSize), $i, $tmp; for($i=1; $i<$tournamentSize; $i++){ $tmp = (int)rand ($gPopulationSize); if($fitness[$tmp]>$fitness[$parent]){ //Fitness minimization $parent = $tmp; } } return $parent; } proc float [] crossover(float $genomes [], float $newGenomes[], int $father, int $mother, int $child){ global int $grid; global int $gGenomeLength; int $i, $crossoverPoint = (int)rand ($gGenomeLength); for($i=0; $i<$crossoverPoint; $i++) $newGenomes[Grid_NxNxN_GetIndex($child,$i, $grid)] = $genomes[Grid_NxNxN_GetIndex($father,$i, $grid)]; for($i=$crossoverPoint; $i<$gGenomeLength; $i++) $newGenomes[Grid_NxNxN_GetIndex($child,$i, $grid)] = $genomes[Grid_NxNxN_GetIndex($mother,$i,$grid)]; return $newGenomes; } proc float [] mutate(float $newGenomes[], int $elites, float $mutationRate){ global int $grid; global int $gPopulationSize; global int $gGenomeLength; int $i, $j; for($i=$elites; $i<$gPopulationSize; $i++){ for($j=0; $j<$gGenomeLength; $j++){ if($mutationRate>`rand 100.0`){ // Mutate if(`rand 2.0`>1.0) $newGenomes[Grid_NxNxN_GetIndex( $i, $j , $grid)] = `rand ($grid-1)` ; else $newGenomes[Grid_NxNxN_GetIndex( $i, $j , $grid)] = `rand ($grid-1)` ; } } } return $newGenomes; } global int $grid; proc int Grid_NxNxN_GetIndex(int $i, int $j , int $grid){ // int $gridX = $grid; int $gridY = $grid; int $Xi = fmod($i,$gridX); int $Yi = fmod($j,$gridY)+1; int $row = $Yi * $gridY; int $rowB = $Yi * ($gridY-1); int $place =((($row+$Xi)-$gridY)); return $place; } proc int [] Index_NxNxN_GetGrid(int $place, int $grid){ float $findXY = float($place)/ float ($grid) ; float $val = (int)($findXY); float $findXYb = (($val*$grid)/$grid); float $findXYa = ((-1*$findXYb*$grid)+$place)+$grid; 
float $IndexA = fmod($findXYa,$grid); float $IndexB = fmod($findXYb,$grid); int $XY[]; $XY = {int ($IndexA), int ($IndexB)}; return $XY; } vector $PxyzII[]; string $Loc_pointZ[]; string $CURVES_Stree[]; $Loc_pointZ=`ls-sl`; $PxyzII=PointArrayT($Loc_pointZ); string $CURVES_Stree[]; $CURVES_Stree=Stree3D($PxyzII,250); int $VecInt[]; $CURVES_Stree=Stree3DIndex($PxyzII,250,$VecInt); print $VecInt; // for each pair remove int from index int $iC=0; int $Pair[],$LocalPair[],$diffList[],$IndexPts[]; $IndexPts = CreateIntIndex(`size($PxyzII)`); $Pair =IndexPairFunc($iC); $LocalPair={$VecInt[$Pair[0]],$VecInt[$Pair[1]]} $diffList=intArrayRemoveExact($LocalPair,$IndexPts); int $j, $i; for ( $i = 0; $i < (`size($diffList)`); $i++) { $diffListB=intArrayRemoveExact($diffList[$i],$diffList); $j=IsPointArray_in_ThreePointCircle_Global({$LocalPair[0],$LocalPair[0],$diffList[$i]},$diffListB); } ////////// global float $x0WXYZ[]; global float $LearningCoeff; global float $Threshold; global float $Out; global int $SizeTron; global int $SizeDimentions; global float $localError; $SizeDimentions=3; ////////////////////////////////////////////////// clear $x0WXYZ; vector $NeuronVec[]; string $CurveItemX[]=`ls -sl`; $NeuronVec = VecCurveCvs( $CurveItemX[0]); $SizeTron=`size($NeuronVec)`; //print $NeuronVec; float $outputR[]; $outputR=CPerceptron_Output($NeuronVec); CPerceptron_CPerceptronZ(); $AllValues=""; for($i=0; $i<$SizeDimentions; $i++){ $AllValues=($AllValues+$x0WXYZ[$i]+" , ");} $AllValues=($AllValues+"\n"); print $AllValues; $CurveItemX =`ls -sl`; $NeuronVec = VecCurveCvs( $CurveItemX[0]); $localError=1; $outputR=CPerceptronZ_Train($NeuronVec, 1.0000); 0.1811089344 , 0.3556191124 , 0.4547299137 , 0.7584331156 , 0.2780546245 , 0.345945621 , 0.1624187718 , // 0.8021090934 , 0.8907904716 , 0.1036644868 , 5.602109093 , 0.8907904716 , 6.103664487 , // Result: 16.806327 16.806327 59.531979 // 5.602109093 , 0.8907904716 , 6.103664487 , 6.602109093 , 1.090790472 , 7.503664487 , 
6.602109093 , 1.090790472 , 7.503664487 , // Result: 16.806327 16.806327 59.531979 // // Result: 19.806327 20.351723 69.125542 // CPerceptron_Output($NeuronVec); int $ii; clear $outputR; $AllValues=""; for($ii=0; $ii<22; $ii++){ $AllValues=""; if($localError==0){ for($i=0; $i<3; $i++){ $AllValues=($AllValues+$x0WXYZ[$i]+" , "); } $AllValues=($AllValues+"\n"); print $AllValues; break; }else{ //print (" output "+"\n"); $outputR=CPerceptronZ_Train($NeuronVec, 1.0000); for($i=0; $i<3; $i++){ $AllValues=($AllValues+$x0WXYZ[$i]+" , "); } $AllValues=($AllValues+"\n"); print $AllValues; } } /////END print (" cycles "+"\n"); print $ii; print (" cycles "+"\n"); $AllValues=""; for($i=0; $i<3; $i++){$AllValues=($AllValues+$x0WXYZ[$i]+" , ");} $AllValues=($AllValues+"\n"); print $AllValues; CPerceptron_Output($NeuronVec); //////////////////////////// CPerceptron_CPerceptronZ(); $SizeTron=10; $SizeDimentions=3; float $xyN[]; for($i=0; $i<$SizeTron; $i++){ for($d=0; $d<$SizeDimentions; $d++){ $xyN[((($i)*$SizeDimentions)+($d+1))-1]=float(($i)*$SizeDimentions)+($d+1); } } print $xyN; for($ii=0; $ii<222; $ii++){ $localError=1; $outputR=CPerceptronZ_Train($xyN, 1.0000); } CPerceptron_Output($xyN); //////////////// proc float [] CPerceptronZ_Train(float $VecList[], float $r){ global float $x0WXYZ[]; global float $LearningCoeff; global float $Threshold; global float $Out; global int $SizeTron; global int $SizeDimentions; global float $localError; float $Sum,$N; float $Result[]; float $Output; float $Correction; float $Error; int $i,$d,$p; $Output=$localError=0; // vector $VecList[]= $NeuronVec; float $r =0.99; //$outputR=CPerceptronZ_Train ($NeuronVec, 0.99); // size is the Number of values that are being trained the vector LIST .. SizeDimentions are each X.. y.. z.. w... //how many Dimentions 2d 3d 4d? 
//this is the global weights in 2D there are TWO weights 3D three weights for XY & Z //$SizeDimentions= `size($x0WXYZ)`; float $xN[]; for ($p = 0; $p < $SizeTron; $p++){ // Calculate $output. for($d=0; $d<$SizeDimentions; $d++){ $xN[$d]=$VecList[((($p)*$SizeDimentions)+($d+1))-1]; } $Sum =0; for($d=0; $d<$SizeDimentions; $d++){ $Sum += ($xN[$d]*$x0WXYZ[$d]); $Result[$d]=$Out = $Sum; $Out=$N=(($Sum >= 1) ? 1.0 : -1.0); } //print (" Sum " + $Sum + "\n"); $Sum +=((-1)*$Threshold); //print (" $Sum +=((-1)*$Threshold) " + $Sum + "\n"); $Sum=Sigmoid($Sum); //print (" $Sum=Sigmoid($Sum) " + $Sum + "\n"); $Out= $N=(( $Sum >= 1) ? 1.0 : -1.0); //$Out= Output($weights, VecCom($inputs[$p],0),VecCom($inputs[$p],1)); // Calculate error. $Error = 1.0 - $Out; //place $Sum? if ($Error != 0){ // print ("($Error != 0) " + $Error +"\n"); // Update $weights. for ($i = 0; $i < $SizeDimentions; $i++){ $x0WXYZ[$i] += $LearningCoeff*$Error * $xN[$i]; } } // Convert error to absolute value. $localError += abs($Error); } print (" localError = " + $localError +"\n"); if($localError==0){ print (" localError = ZERO !" + $localError +"\n");} print (" AllValues = " + "\n"); string $AllValues=""; for($i=0; $i<$SizeDimentions; $i++){ $AllValues=($AllValues+$x0WXYZ[$i]+" , ");} $AllValues=($AllValues+"\n"); print $AllValues; return $Result; } proc float [] CPerceptron_Output(float $VecList[]){ global float $x0WXYZ[]; global float $LearningCoeff; global float $Threshold; global float $Out; global int $SizeTron; global int $SizeDimentions; global float $localError; float $Sum; float $Result[]; float $Output; float $Correction; float $Error; int $i,$d; // $localError=0; // vector $VecList[]= $NeuronVec; float $r =0.99; //$outputR=CPerceptronZ_Train($NeuronVec, 0.99); // size is the Number of values that are being trained the vector LIST .. SizeDimentions are each X.. y.. z.. w... //how many Dimentions 2d 3d 4d? 
//this is the global weights in 2D there are TWO weights 3D three weights for XY & Z float $xN[]; float $N; for($i=0; $i<$SizeTron; $i++){ //local item pair in list turned float.. for($d=0; $d<$SizeDimentions; $d++){ $xN[$d]=$VecList[((($i)*$SizeDimentions)+($d+1))-1]; } $Sum =0; for($d=0; $d<$SizeDimentions; $d++){ $Sum += ($xN[$d]*$x0WXYZ[$d]); $Out = $Sum; //$Result[$d]=Sigmoid(abs($Sum)); } $Sum +=((-1)*$Threshold); //print (" $Sum +=((-1)*$Threshold) " + $Sum + "\n"); $Sum=Sigmoid($Sum); //print (" $Sum=Sigmoid($Sum) " + $Sum + "\n"); $Out= $N=(($Sum >= 1) ? 1.0 : -1.0); //$Result[$i]=$Out; // $Result[$i]=Sigmoid($Sum); //if(abs(abs(Sigmoid($Sum))-1.0)<$Threshold){ if( $Out==1){ $Output = 1.0; }else{ $Output = 0.0;} $Result[$i]=$Output; // $N=(($Sum >= 0) ? 1 : -1); // $Result[$i]=$N; } //print $Result; return $Result; } //Sigmoid function proc float CPerceptron_Sigmoid(float $x){ float $S = (1.0/(1.0+`exp(-$x)`)); return $S; } proc float Sigmoid(float $x){ float $S = (1.0/(1.0+`exp(-$x)`)); return $S; } //Setting up parameters proc CPerceptron_CPerceptronZ(){ global float $x0WXYZ[]; global float $LearningCoeff; global float $Threshold; global float $Out; global int $SizeTron; global int $SizeDimentions; //srand((unsigned)(time(NULL))); $LearningCoeff = 0.02; $Threshold = 0.5; for($i=0; $i<$SizeDimentions; $i++){ $x0WXYZ[$i] = abs((float)(rand(32007))/(32767/2) - 1); } } //Old but worked proc float [] CPerceptronZ_Train(vector $VecList[], float $r){ global float $x0WXYZ[]; global float $LearningCoeff; global float $Threshold; global float $Out; global int $SizeTron; global int $SizeDimentions; global float $localError; float $Sum; float $Result[]; float $Output; float $Correction; float $Error; int $i,$d; $localError=0; $SizeTron = size($VecList); $SizeDimentions= `size($x0WXYZ)`; float $xN[]; for($i=0; $i<$SizeTron; $i++){ //local item pair in list turned float $xN=$VecList[$i]; $Sum =0; for($d=0; $d<$SizeDimentions; $d++){ $Sum += ($xN[$d]*$x0WXYZ[$d]); $Out 
= $Sum; $Result[$d]= abs($Out); }
    $Sum +=((-1)*$Threshold);
    if( $Sum>$Threshold){ $Output = 1.0; }else{ $Output = 0.0;}
    ///print ("$Sum>$Threshold " + $Sum+" > "+ $Threshold +"\n");
    $Error = ((float)$r)-($Output);
    $Correction = $LearningCoeff*$Error;
    for($d=0; $d<$SizeDimentions; $d++){
        if($Result[$d]!=1){ $x0WXYZ[$d] += $Correction; $localError+=$Correction;}
    }
}
if($localError==0){ print (" localError = " + $localError +"\n");}
return $Result;
}
//Old but worked^
proc float VecCom(vector $Vai, int $XYZ){
    //#print " VecCom " ; print "line 619 "; print "\n" ;
    float $x, $y, $z;
    $x = $Vai.x; $y = $Vai.y; $z = $Vai.z;
    float $N;
    if($XYZ==0){$N=$x;}
    if($XYZ==1){$N=$y;}
    if($XYZ==2){$N=$z;}
    return $N;
}
proc float [] cycleFloatArray(float $IntList[]){
    int $SizeS,$i;
    float $IntListC[];
    $SizeS=`size($IntList)`-1;
    $IntListC[0]=$IntList[$SizeS];
    for($i=0; $i<$SizeS; $i++){ $IntListC[$i+1]=$IntList[$i]; }
    return $IntListC;
}
proc float [] CPerceptronZ_Train(vector $VecList[], float $r){
    global float $x0WXYZ[]; global float $LearningCoeff; global float $Threshold;
    global float $Out; global int $SizeTron; global int $SizeDimentions; global float $localError;
    float $Sum; float $Result[]; float $Output; float $Correction; float $Error;
    int $i,$d;
    $Output=$localError=0;
    // vector $VecList[]= $NeuronVec; float $r =0.99; //$outputR=CPerceptronZ_Train ($NeuronVec, 0.99);
    // size is the Number of values that are being trained the vector LIST .. SizeDimentions are each X.. y.. z.. w...
    $SizeTron = size($VecList);
    //how many Dimentions 2d 3d 4d?
    //this is the global weights in 2D there are TWO weights 3D three weights for XY & Z
    $SizeDimentions= `size($x0WXYZ)`;
    float $xN[];
    for($i=0; $i<$SizeTron; $i++){
        //local item pair in list turned float..
// --- continuation of a training proc whose header precedes this excerpt ---
        $xN  = $VecList[$i];
        $Sum = 0;
        if ($SizeDimentions > 3) { $xN[3] = mag($VecList[$i]); }
        for ($d = 0; $d < $SizeDimentions; $d++) {
            $Sum += ($xN[$d] * $x0WXYZ[$d]);
            // print ("$Sum += ($xN[$d]*$x0WXYZ[$d]) : " + $Sum + " += " + $xN[$d] + " * " + $x0WXYZ[$d] + "\n");
            $Out = $Sum;
            $Result[$d] = Sigmoid(abs($Out));
            //$Result[$d] = $Out;
        }
        $Sum += ((-1) * $Threshold);
        //$Result[$i] = Sigmoid($Sum);
        $Sum = Sigmoid($Sum);
        if (Sigmoid($Out) > $Threshold) { $Output = 1.0; } else { $Output = 0.0; }
        print ("Sum>Threshold " + $Sum + " > " + $Threshold + " Output = " + $Output + "\n");
        $Error = ((float)$r) - $Output;
        print ("Error = " + $Error + " : " + $r + " - " + $Output + "\n");
        $Correction = $LearningCoeff * $Error;
        for ($d = 0; $d < $SizeDimentions; $d++) {
            if ($Result[$d] != 1) {
                $x0WXYZ[$d]  += $Correction;
                $localError  += $Correction;
            }
        }
    }
    print (" localError = " + $localError + "\n");
    if ($localError == 0) { print (" localError = ZERO ! " + $localError + "\n"); }
    // Dump the current weight vector.
    string $AllValues = "";
    for ($i = 0; $i < $SizeDimentions; $i++) { $AllValues = ($AllValues + $x0WXYZ[$i] + " , "); }
    print ($AllValues + "\n");
    return $Result;
}

// Train the perceptron on a list of vectors against target response $r.
// $SizeTron is the number of training vectors in the list; $SizeDimentions
// counts the components (x, y, z, w) and therefore the weights in $x0WXYZ:
// two weights in 2D, three in 3D for X, Y & Z.
proc float [] CPerceptronZ_Train(vector $VecList[], float $r)
{
    global float $x0WXYZ[];
    global float $LearningCoeff;
    global float $Threshold;
    global float $Out;
    global int   $SizeTron;
    global int   $SizeDimentions;
    global float $localError;

    float $Sum, $N;
    float $Result[];
    float $Output;
    float $Error;
    int   $i, $d, $p;

    $Output = $localError = 0;
    //float $r = 0.99;   // bug in the original: this redeclaration shadowed the $r parameter
    //$outputR = CPerceptronZ_Train($NeuronVec, 0.99);

    $SizeTron       = size($VecList);
    $SizeDimentions = `size($x0WXYZ)`;   // how many dimensions: 2D, 3D, 4D?

    float $xN[];
    for ($p = 0; $p < $SizeTron; $p++) {
        // Forward pass: weighted sum of the components of this vector.
        $xN  = $VecList[$p];
        $Sum = 0;
        if ($SizeDimentions > 3) { $xN[3] = mag($VecList[$p]); }   // was $VecList[$i] in the original: a bug
        for ($d = 0; $d < $SizeDimentions; $d++) {
            $Sum += ($xN[$d] * $x0WXYZ[$d]);
            $Result[$d] = $Out = $Sum;
            $Out = $N = (($Sum >= 1) ? 1.0 : -1.0);
        }
        print (" Sum " + $Sum + "\n");
        $Sum += ((-1) * $Threshold);
        $Sum  = Sigmoid($Sum);
        $Out  = $N = (($Sum == 1) ? 1.0 : -1.0);
        //$Out = Output($weights, VecCom($inputs[$p],0), VecCom($inputs[$p],1));

        // Error-driven weight update.
        $Error = 1.0 - $Out;
        if ($Error != 0) {
            print ("($Error != 0) " + $Error + "\n");
            for ($i = 0; $i < $SizeDimentions; $i++) {
                $x0WXYZ[$i] += $LearningCoeff * $Error * $xN[$i];
            }
        }
        // Accumulate error as an absolute value.
        $localError += abs($Error);
    }
    print (" localError = " + $localError + "\n");
    if ($localError == 0) { print (" localError = ZERO ! " + $localError + "\n"); }
    string $AllValues = "";
    for ($i = 0; $i < $SizeDimentions; $i++) { $AllValues = ($AllValues + $x0WXYZ[$i] + " , "); }
    print ($AllValues + "\n");
    return $Result;
}

// Feed-forward only: classify each vector with the trained weights.
proc float [] CPerceptron_Output(vector $VecList[])
{
    global float $x0WXYZ[];
    global float $Threshold;
    global float $Out;
    global int   $SizeTron;
    global int   $SizeDimentions;

    float $Sum, $N;
    float $Result[];
    float $Output;
    int   $i, $d;

    $SizeTron       = size($VecList);
    $SizeDimentions = `size($x0WXYZ)`;

    float $xN[];
    for ($i = 0; $i < $SizeTron; $i++) {
        $xN = $VecList[$i];
        if ($SizeDimentions > 3) { $xN[3] = mag($VecList[$i]); }
        $Sum = 0;
        for ($d = 0; $d < $SizeDimentions; $d++) {
            $Sum += ($xN[$d] * $x0WXYZ[$d]);
            $Out  = $Sum;
        }
        $Sum = Sigmoid($Sum + ((-1) * $Threshold));
        $Out = $N = (($Sum >= 1) ? 1.0 : -1.0);
        $Result[$i] = ($Out == 1) ? 1.0 : 0.0;
    }
    return $Result;
}

/*
// Visualisation: sample a cube of points, classify each with the trained
// perceptron, and drop a small polyCube at every point on the "off" side.
CPerceptron_Output($NeuronVec);

proc string Loc(float $Points[])
{
    string $LocObjects[] = `polyCube -w 0.5 -h 0.5 -d 1 -sx 1 -sy 1 -sz 1 -ax 0 1 0 -cuv 4 -ch 0`;
    //string $LocObjects[] = `spaceLocator -p 0 0 0`;
    setAttr ($LocObjects[0] + ".translate") $Points[0] $Points[1] $Points[2];
    return $LocObjects[0];
}

//$Threshold = 0.5;
int    $ix = 0;
float  $output[];
float  $x, $y, $z;
string $tempLocs[];
clear  $tempLocs;
float  $step     = 4.25;
float  $scaleBox = 15.0;
// Display network generalisation.
for ($x = -$scaleBox; $x <= $scaleBox; $x += $step) {
    for ($y = -$scaleBox; $y <= $scaleBox; $y += $step) {
        for ($z = -$scaleBox; $z <= $scaleBox; $z += $step) {
            $output = CPerceptron_Output({<<$x, $y, $z>>});
            if ($output[0] != 1) {
                $tempLocs[`size($tempLocs)`] = Loc(<<$x, $y, $z>>);
                editDisplayLayerMembers -noRecurse layer1 $tempLocs[`size($tempLocs)` - 1];
                $ix++;
            }
            if ($ix > 9000) { break; }
        }
        if ($ix > 9000) { break; }
    }
    if ($ix > 9000) { break; }
}
*/
///////////////////////////////
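The MEL code above implements a classic single-layer perceptron: a weighted sum per input dimension, a threshold, and an error-scaled weight correction. As a cross-check of the underlying algorithm, here is a minimal, self-contained Python sketch of the same rule; the names (`predict`, `train`, `learn_rate`) and the sample data are illustrative inventions, not taken from the MEL.

```python
# Minimal perceptron: weighted sum, hard threshold, error-driven update.
# Mirrors the MEL structure: one weight per input dimension, a bias in
# place of the explicit $Threshold, and a learning-rate-scaled correction.

def predict(weights, bias, x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 if s > 0 else 0.0

def train(samples, labels, learn_rate=0.1, epochs=50):
    dims = len(samples[0])
    weights = [0.0] * dims
    bias = 0.0
    for _ in range(epochs):
        total_error = 0.0
        for x, target in zip(samples, labels):
            error = target - predict(weights, bias, x)
            for d in range(dims):
                weights[d] += learn_rate * error * x[d]
            bias += learn_rate * error
            total_error += abs(error)
        if total_error == 0:   # converged: every point classified correctly
            break
    return weights, bias

# Learn a 3D half-space: label is 1 when x + y + z > 0.
pts = [(1, 1, 1), (2, 0, 1), (-1, -2, -1), (-3, 1, -2), (0.5, 0.5, 0.5), (-1, -1, -1)]
lbl = [1.0, 1.0, 0.0, 0.0, 1.0, 0.0]
w, b = train(pts, lbl)
assert all(predict(w, b, p) == t for p, t in zip(pts, lbl))
```

Because the data is linearly separable, the perceptron convergence theorem guarantees the loop terminates with zero error; the MEL version's cube-sampling visualisation then just evaluates this decision boundary on a grid.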


We’ve been told to go out on a limb, and say something surprising. So I’ll try and do that. But I want to start with two things that everyone already knows. And the first one, in fact, is something that has been known for most of recorded history. And that is that the planet Earth, or the solar system, or our environment, or whatever, is uniquely suited to sustain our evolution, or creation as it used to be thought, and our present existence, and most important, our future survival.

Nowadays this idea has a dramatic name: Spaceship Earth. And the idea there is that outside the spaceship, the universe is implacably hostile, and inside is all we have, all we depend on. And we only get the one chance- if we mess up our spaceship, we’ve got nowhere else to go. Now the second thing that everyone already knows is that, contrary to what was believed for most of human history, human beings are not, in fact, the hub of existence. And Stephen Hawking famously said, we’re just a chemical scum on the surface of a typical planet, that’s in orbit around a typical star, which is on the outskirts of a typical galaxy, and so on.
Now the first of those two things that everyone knows, is kind of saying that we’re at a very un-typical place, uniquely suited, and so on, and the second one is saying that we’re at a typical place, and especially if you regard these two as deep truths to live by, and to inform your life decisions, then they seem a little bit to conflict with each other. But that doesn’t prevent them from both being completely false. (laughter) And they are. So let me start with the second one.
Typical. Well- Is this a typical place? Well, let’s look around, you know, and look in a random direction, and we see a wall, and chemical scum, (laughter) and that’s not typical of the universe at all. All you’ve got to do is go a few hundred miles in that same direction (points skyward) and look back, and you won’t see any walls, or chemical scum at all, all you see is a blue planet. And if you go further than that, you’ll see the sun, the solar system, and the stars, and so on. But that’s still not typical of the universe, because stars come in galaxies. And most places in the universe, a typical place in the universe, is nowhere near any galaxies. So let’s go further, till we’re outside the galaxy, and look back, and yeah, there’s the huge galaxy with spiral arms laid out in front of us. And at this point we’ve come 100,000 light years from here. But we’re still nowhere near a typical place in the universe. To get to a typical place, you’ve got to go 1,000 times as far as that, into intergalactic space. And so what does that look like? What does a typical place in the universe look like?
Well, at enormous expense, TED has arranged a high resolution immersion virtual reality rendering of intergalactic space- the view from intergalactic space. So can we have the lights off, please, so we can see it? (lights go out, all is dark except for a couple of computer screens) Well, not quite, not quite perfect- you see, in intergalactic space, intergalactic space is completely dark- pitch dark. It’s so dark, that if you were to be looking at the nearest star to you, and that star were to explode as a supernova, and you were to be staring directly at it at the moment when its light reached you, you still wouldn’t be able to see even a glimmer. That’s how big, and how dark, the universe is. And that’s despite the fact that a supernova is so bright, so brilliant, an event, that it would kill you stone dead at a range of several light years. And yet, from intergalactic space, it’s so far away, you wouldn’t even see it. It’s also very cold out there- less than 3 degrees above absolute zero. And it’s very empty. The vacuum there is one million times less dense than the highest vacuum that our best technology on Earth can currently create. So that’s how different a typical place is from this place. And that is how un-typical this place is. So can we have the lights back on please? Thank you.
Now how do we know about an environment that’s so far away, and so different, and so alien, from anything we’re used to? Well, the Earth, our environment, in the form of us, is creating knowledge. Well, what does that mean? Well, look out even further than we’ve just been- I mean from here, with a telescope- and you’ll see things that look like stars. They’re called quasars. Quasars originally meant quasi-stellar object. Which means things that look a bit like stars. And- (laughter) But they’re not stars. And we know what they are. Billions of years ago, and billions of light years away, the material at the center of a galaxy collapsed towards a super-massive black hole, and then intense magnetic fields directed some of the energy of that gravitational collapse, and some of the matter, back out in the form of tremendous jets which illuminated lobes with the brilliance of, I think it’s a trillion suns.
Now, the physics of the human brain could hardly be more unlike the physics of such a jet. We couldn’t survive for an instant in it. Language breaks down when trying to describe what it would be like in one of those jets. It would be a bit like experiencing a supernova explosion, but at point-blank range and for millions of years at a time. (laughter) And yet, that jet happened in precisely such a way, that billions of years later, on the other side of the universe, some bit of chemical scum could accurately describe, and model, and predict, and explain, above all- there’s your reference- what was happening there, in reality. The one physical system, the brain, contains an accurate working model of the other- the quasar. Not just a superficial image of it, though it contains that as well, but an explanatory model, embodying the same mathematical relationships and the same causal structure. Now that is knowledge. And if that weren’t amazing enough, the faithfulness with which the one structure resembles the other is increasing with time. That is the growth of knowledge. So, the laws of physics have this special property. That physical objects, as unlike each other as they could possibly be, can nevertheless embody the same mathematical and causal structure, and do so more and more over time. So we are a chemical scum that is different. This chemical scum has universality. Its structure contains, with ever-increasing precision, the structure of everything. This place, and not other places in the universe, is a hub which contains within itself the structural and causal essence of the whole of the rest of physical reality. And so far from being insignificant, the fact that the laws of physics allow this, or even mandate that this can happen, is one of the most important things about the physical world.
Now how does the solar system- and our environment, in the form of us- acquire this special relationship with the rest of the universe? Well, one thing that’s true about Stephen Hawking’s remark- I mean, it is true, but it’s the wrong emphasis. One thing that’s true about it is that it doesn’t do it with any special physics, there’s no special dispensation, no miracles involved. It does it simply with three things that we have here in abundance. One of them is matter, because, well, the growth of knowledge is a form of information processing, information processing is computation, computation requires a computer- there’s no known way of making a computer without matter. We also need energy to make the computer, and most important to make the media, in effect, onto which we record the knowledge that we discover. And then thirdly, less tangible, but just as essential for the open-ended creation of knowledge, of explanations, is evidence. Now, our environment is inundated with evidence. We happened to get round to testing, let’s say, Newton’s Law of Gravity, about 300 years ago. But the evidence that we used to do that was falling down on every square meter of the Earth for billions of years before that, and will continue to fall for billions of years afterwards. And the same is true for all the other sciences. As far as we know, evidence to discover the most fundamental truths of all the sciences is here just for the taking, on our planet. Our location is saturated with evidence, and also with matter and energy.
Out in intergalactic space, those three prerequisites for the open-ended creation of knowledge are at their lowest possible supply. As I said, it’s empty, it’s cold, and it’s dark out there. Or is it? Now actually, that’s just another parochial misconception. Because imagine a cube out there in intergalactic space, the same size as our home, the solar system. Now that cube is very empty by human standards, but that still means that it contains over a million tons of matter. And a million tons is enough to make, say, a self-contained space station, on which there’s a colony of scientists that are devoted to creating an open-ended stream of knowledge, and so on. Now it’s way beyond present technology to even gather the hydrogen from intergalactic space and form it into other elements and so on, but the thing is, in a comprehensible universe, if something isn’t forbidden by the laws of physics, then what could possibly prevent us from doing it, other than knowing how? In other words, it’s a matter of knowledge, not resources. And the same- well, if we could do that we’d automatically have an energy supply, because the transmutation would be a fusion reactor- and evidence? Well, again, it’s dark out there to human senses. But all you’ve got to do is take a telescope, even one of present day design, look out and you’ll see the same galaxies as we do from here. And with a more powerful telescope, you’ll be able to see stars, and planets, in those galaxies, you’ll be able to do astrophysics, and learn the laws of physics, and locally there you could build particle accelerators, and learn elementary particle physics, and chemistry, and so on. Probably the hardest science to do would be biology field trips. Because it would take several hundred million years to get to the nearest life-bearing planet and back.
But I have to tell you, and sorry, Richard, but I never did like biology field trips much, and I think we can just about make do with one every few hundred million years (laughter).
So, in fact, intergalactic space does contain all the prerequisites for the open-ended creation of knowledge. Any such cube, anywhere in the universe, could become the same kind of hub that we are, if the knowledge of how to do so were present there. So we’re not in a uniquely hospitable place. If intergalactic space is capable of creating an open-ended stream of explanations, then so is almost every other environment. So is the Earth. So is a polluted Earth. And the limiting factor, there and here, is not resources, because they’re plentiful, but knowledge, which is scarce.
Now this cosmic knowledge-based view may, and I think ought to, make us feel very special. But it should also make us feel vulnerable, because it means that without the specific knowledge that’s needed to survive the ongoing challenges of the universe, we won’t survive them. All it takes is for a supernova to go off a few light years away, and we’ll all be dead! Martin Rees has recently written a book about our vulnerability to all sorts of things, from astrophysics, to scientific experiments gone wrong, and most importantly, to terrorism with weapons of mass destruction. And he thinks that civilization only has a 50% chance of surviving this century. I think he’s going to talk about that later in the conference.
Now I don’t think that probability is the right category to discuss this issue in. But I do agree with him about this. We can survive, and we can fail to survive. But it depends not on chance, but on whether we create the relevant knowledge in time. The danger is not at all unprecedented. Species go extinct all the time. Civilizations end. The overwhelming majority of all species and all civilizations that have ever existed are now history. And if we want to be the exception to that, then logically our only hope is to make use of the one feature that distinguishes our species, and our civilization, from all the others. Namely, our special relationship with the laws of physics. Our ability to create new explanations, new knowledge. To be a hub of existence.
So let me now apply this to a current controversy, not because I want to advocate any particular solution, but just to illustrate the kind of thing I mean. And the controversy is global warming. Now, I’m a physicist, but I’m not the right kind of physicist. In regard to global warming, I’m just a layman. And the rational thing for a layman to do is to take seriously the prevailing scientific theory. And according to that theory, it’s already too late to avoid a disaster. Because if it’s true that our best option at the moment is to prevent CO2 emissions with something like the Kyoto protocol, with its constraints on economic activity, and its enormous cost of hundreds of billions of dollars or whatever it is, then that is already a disaster by any reasonable measure. And the actions that are advocated are not even purported to solve the problem, merely to postpone it by a little. So it’s already too late to avoid it, and it probably has been too late to avoid it ever since before anyone realized the danger. It was probably already too late in the 1970s, when the best available scientific theory was telling us that industrial emissions were about to precipitate a new ice age in which billions would die.
Now the lesson of that seems clear to me, and I don’t know why it isn’t informing public debate. It is that we can’t always know. When we know of an impending disaster, and how to solve it at a cost less than the cost of the disaster itself, then there’s not going to be much argument, really. But no precautions, and no precautionary principle, can avoid problems that we do not yet foresee. Hence we need a stance of problem fixing, not just problem avoidance.
And it’s true that an ounce of prevention equals a pound of cure, but that’s only if we know what to prevent. If you’ve been punched on the nose, then the science of medicine does not consist of teaching you how to avoid punches. If medical science stopped seeking cures and concentrated on prevention only, then it would achieve very little of either. The world is buzzing at the moment with plans to force reductions in gas emissions at all costs. It ought to be buzzing with plans to reduce the temperature, and with plans to live at the higher temperature. And not at all costs, but efficiently and cheaply. And some such plans exist, things like swarms of mirrors in space to deflect the sunlight away, and encouraging aquatic organisms to eat more carbon dioxide. At the moment, these things are fringe research. They’re not central to the human effort to face this problem, or problems in general. And with problems that we are not aware of yet, the ability to put right- not the sheer good luck of avoiding indefinitely- is our only hope, not just of solving problems, but of survival.
So take two stone tablets, and carve on them. On one of them, carve “Problems are soluble.” And on the other one carve “Problems are inevitable.” Thank you.
When two people are talking to each other they make eye contact, they break eye contact; that's how we know when someone's paying attention. They nod, they say yes, they give this very simple feedback, which is cross-cultural and lets us know how the conversation is going. When we try to build a system without those feedback cues, it can become very disorienting to know where we are going. And that's why the speech systems we talk to over the telephone seem so infantile: they have to spell out every step of the way, or we get lost as to what they understand. By building a robot with human form, we can just rely on the natural cues that we already understand.

The REVOLUTION of EVOLUTION and Machine Automata

By John Stifter

Virtual Reality Rendering

We are the universe observing itself locally. The universe is understanding itself; our brain is a construct of the universe in the works. Is nature not mathematical? Are we not an assembly of protein parts sharing a developmental lineage with near and distant species? With the parts-for-the-parts-for-the-parts assembly encoded in DNA for prespecified, domain-specific structure with no computation involved? David explains in chapter 8, on pages 178-181, that the exact physical nature of a replicator's structure is a cause of its own copying by the environment of its niche. He explains that most variants of a particular replicator would fail to cause most areas of its environmental niche to copy them. Variation for replicators then means that some variations in a replicator's physical structure can potentially become more adaptive to an environmental niche. Some variations work, but is it a finite pattern? Or does accumulated success mean that the organism might expand its species and potentially evolve more radically, and "render" (no pun intended) a replicator obsolete, or change it to a variant pattern that now works well but previously would have rendered the replicator extinct from this system? In one moment a protein used to make the material a claw is composed of is now a part of making feathers. This force of change, this pressure all around, this huge weight pressing down: we seem trapped in a linear chronology of time asymmetry as an immense energy roaring beneath things pushes out in evolution's directions. The pattern of the adaptive replicators is recorded as code in DNA; this code is the pattern of a working knowledge of a niche. David states that genes embody knowledge about their niches. David says: "This is more than just computing. It is virtual-reality rendering."
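The point made here, that a replicator's structure causes its own copying and that most variants fail to do so, can be caricatured in a few lines of code. This is a toy illustration only: the bit-string genomes, the niche pattern, the fitness function, and the mutation rate below are all invented for the sketch, not drawn from the book.

```python
import random

random.seed(1)

# Toy replicators: bit-strings copied in proportion to how well their
# structure "fits" a fixed niche (matching a target pattern). Poorly
# fitting variants are not copied; better-fitting variants take over.
NICHE = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(genome):
    # Number of sites where the replicator's structure fits the niche.
    return sum(1 for g, n in zip(genome, NICHE) if g == n)

def mutate(genome, rate=0.05):
    # Variation: each site occasionally flips when the genome is copied.
    return [1 - g if random.random() < rate else g for g in genome]

pop = [[random.randint(0, 1) for _ in NICHE] for _ in range(40)]
history = []
for generation in range(60):
    # Copying is caused by fit between structure and niche: the better
    # half replicates (with variation); the rest do not get copied.
    pop.sort(key=fitness, reverse=True)
    history.append(fitness(pop[0]))
    survivors = pop[:20]
    pop = survivors + [mutate(g) for g in survivors]
```

Because the best replicators survive unmutated each generation, the best fitness in `history` never decreases: differential copying accumulates the niche's pattern into the population, which is the sense in which the genes come to embody knowledge about the niche.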

Interfaces to Hierarchies

In my view there are mathematical interfaces between a spectrum of computational hierarchical scales of physical reality. Pour milk into your coffee while it is still spinning from when you were stirring it, and watch the swirling spiral pattern where the white milk mixes with the dark coffee; then take the view from the MIR space station overlooking a hurricane; then our Milky Way galaxy: vortices within vortices. Everything from human brains to black holes is a dynamic fractal iteration, affected differently on the many physical scales of computational hierarchies, computing via multiverse inference.

The Computational Beauty of Nature by Gary William Flake - "The properties of recursion, parallelism, and adaptation play an interesting role as attributes of natural systems. For example, in order for the universe to move coherently from one state to the next, the universe must "remember" previous states, which means that recursion (and its close cousins, feedback and iteration) exists as a form of memory that binds locally occurring moments in time. Multiplicity and parallelism play a similar role that has to do with binding locally occurring points in space. With this in mind, we can see how mixtures of recursion and multiplicity particularly define and differentiate computation, fractals, chaos, complex systems, and adaptation. Starting from a computational framework, fractals are special "programs" that build self-similar structures. Chaotic systems are similar to fractals but also contain functional self-similarity that occurs at different scales. By adding multiplicity and parallelism to nonlinear systems, complex systems can be formed with only local interactions. And when complex systems are coupled to their environment with a feedback mechanism, systems can form implicit models of the environment, which is the basis of adaptation. Finally, when an adaptive system becomes so complex that it receives feedback from itself, a self-referential system is created that can potentially have all of the strengths and weaknesses of the computational basis that we started out with.
In this way, primitive computational systems can beget more sophisticated computational systems that build on previously built pieces. Looking at the organization of nature, we find that the most interesting things are composed of smaller interesting things. This is evident when we consider that societies, economies, and ecosystems are made of animals, humans, and species, which are made of cells, which consist of amazingly complicated organelles, which are themselves composed of an elaborate ensemble of autocatalytic chemical reactions. Each level is nearly a universe in itself, since all of them use and support types of structural and functional self-similarity, multiplicity and parallelism, recursion and feedback, and self-reference. Nature, then, appears to be a hierarchy of computational systems that are forever on the edge between computability and incomputability.

This behavior, which is best seen as being on the border between computability and incomputability, acts as an interface between the components of hierarchically organized structures. The interface between computability and incomputability is relevant to mathematical, fractal, chaotic, complex, and adaptive systems. In each case, the most interesting types of behavior fall somewhere between what is computable and what is incomputable. This raises an interesting point regarding the levels at which science tries to discover patterns in nature. The bottom-up reductionist approach is used to describe the functions of the lowest-level structures and to infer the structure and function of higher-level things based on the known roles. This is a perfect approach when things are computable and can be described in a closed analytical form. In such simple systems, all higher-level behaviors can be predicted from a basic set of rules. The top-down and somewhat holistic approach is to describe things from the opposite direction. Experiments are made and observations are noted. From this point, one is faced with the difficult task of deriving lower-level rules from upper-level behaviors. While both methods of investigation have a role in science (and in all scientific domains), the interface between levels of organization may be such that neither method is really up to the job. For novel phenomena, simulation becomes the crucial form of investigation."

In The Computational Beauty of Nature, Flake explains that an understanding of the underlying aspects of the physical world REQUIRES simulation.
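The claim that some phenomena can only be understood by simulation has a textbook illustration: the logistic map, a one-line feedback recursion whose long-run behavior for most parameter values has no closed form and must simply be iterated. A minimal sketch; the parameter values used here are the standard textbook ones, not Flake's specific examples.

```python
# Logistic map x -> r*x*(1-x): a single line of recursion (feedback),
# yet for r near 4 it is chaotic. No closed formula predicts the n-th
# value; the only way to "know" the trajectory is to simulate it.

def iterate(r, x0, n):
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

# r = 2.5: orderly regime; the map settles onto the fixed point 1 - 1/r = 0.6.
settled = iterate(2.5, 0.2, 1000)

# r = 4.0: chaotic regime; two nearby starting points soon follow
# completely different trajectories (sensitive dependence on
# initial conditions), which is why prediction requires simulation.
a = iterate(4.0, 0.200000, 50)
b = iterate(4.0, 0.200001, 50)
```

The same program, differing only in one parameter, is fully predictable in one regime and accessible only through step-by-step simulation in the other, which is Flake's point about behavior on the edge of computability.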

Insects, plants, and bees are all organisms of biological function, and biological function implies an interrelated system. When a person asks the question "what is the point of plants and insects?" the question seems silly; the same goes for someone who asks what the point of life is. The reason philosophers and thinkers have struggled with this question is that the question "what is the point of life?" is really asking what the point of my life is in relation to reality. So, to answer the question "what is the point of life?": there is none. Bees have a biological function; they are very ancient creatures that have evolved over millions of years. The evolutionary process is very complex. It involves chemical structures, ecology, and geology, all contributing factors to the outcome of present-day bees, down to the very nature of geographic topology, the surface of the earth. Life itself lends a hand to the direction of the evolution of life, and life itself changes the surface of the earth. Nature simply fills an ecological niche. Nature does everything it can with an economic sense built into its use of energy. The nature of insects is largely due to their complex integration into a living terrain: bees share a symbiotic relationship with flowers through their cross-pollination process, and so on. Since mankind has descended from nature, mankind is also part of nature and therefore has a biological function: the human species is sustained by the biomass, and the human species may in turn deflect an asteroid, protecting the biomass. Nature is concerned with its own preservation, and we comply with its agenda. Nature's goal with life is infinite growth. The human species, when compared to other life, has been measured up against insect colonies and the coral reef. Insect colonies, the coral reef, and economies have been categorized as super-species. They are all cooperative and competitive among themselves; they build structures and have a social hierarchy. They can also be said to be migratory, and perhaps invasive.

The condition of reality, from ancient times till now, in the voices of thinkers and scientists, started as a whisper, then a calm voice, then a loud proclamation; in our lifetime it will become a defining roar: that reality is infinite. Not in the sense that everything that will happen has already happened, but in that reality is a thing that changes infinitely, and for observers over an infinite amount of time; a process that never repeats, never ends, and never began.

This infinite change is injected into our universe and throughout the multiverse of other realities, with perhaps different cosmological constraints and varying physical laws; in our universe, for example, the weak and electromagnetic forces and time-asymmetric properties. One can now see in nature a force that constrains and a force of change, engaged in an eternal cosmic battle.

Our descendants of the far future will be peaceful and benevolent beyond our wildest dreams.

The unifying ideas of quantum computation, evolutionary epistemology, and the multiverse conceptions of knowledge, free will, and time have made it clear that our overall understanding of reality is becoming both broader and deeper, and depth is winning. This is the unified worldview based on four strands: the quantum physics of the multiverse, Popperian epistemology, the Darwin-Dawkins theory of evolution, and a strengthened version of Turing's theory of universal computation. This is the current state of our scientific knowledge. Telecommunications is the grand destruction of time and place.

Infinite growth, infinite change, infinite life.

The first inflection point of a profound human-machine interface is the reverse engineering of the inferotemporal cortex and the subsequent cure of human and machine psychanopsia, thus beginning the deep ascent into human mind phase-space. The universe is understanding itself; our brain is a construct of the universe in the works. We are the universe observing itself locally. If comprehension is compression, the power to visualize is our greatest tool, and I suspect that our descendants will at a certain point communicate less and less...

The inflection point for the leviathan of artificial mind has a more primitive catalyst.

For all systems and beings endowed with mind and of human phase-space, life is memory and deletion. A sanctity of all thought patterns of human phase-space can be the final act of reverence in the net collection of mind, which is the product of a universe that is understanding itself.

The empyrean mindscape ascent passage

As the empires of the earth ascend deeper towards the future empires of mind in the voyage without end, remember: you are your brother's keeper, and by the behavior of your brethren shall you be judged. The natures of all worlds seek in their harmony a vessel fashioned of themselves that challenges the walls of our cosmic cradle. Wildness is the preservation of nature, a nature that is free. Know that the meaning of intelligence emanating throughout the universe at the speed of light and thought is nature coalescing into a vessel whose voyage begins with the grand destruction of all boundaries. The first fissure in the domain walls of the impersonal cosmic forces that inexorably bind our minds is the first crack in our cosmic cocoon, flooding the universe with the resplendent rays of new freedom as the walls of all boundaries are shattered... In the opening slivers of God's eyes we will exit this world... in our vessel of mind... in our voyage without end.


Intelligence is an emergent product of evolution. Intelligence is a tool. Technology is a product of intelligence. Technology is a tool. A tool is a form that finds a purpose, and its end purpose is to out-fashion itself, making its existing design obsolete. With the creation of tools, intelligence, through an evolutionary process, will out-fashion itself. A machine is the creation of an ordered system.

In the beginning god made all things, in the end all things will make god.

(assembly, information, complexity, emergent, chaos, random, order, entropy, acceleration, salient periods)
These words will be broken down and expanded upon.

Most anthropologists believe that the use of tools was itself intertwined with the opposable thumb (useful for holding tools) and an increase in intelligence (aiding in the use of tools) in spurring along the evolution of humankind. Most tools can also serve as weapons, such as the hammer and the knife. Similarly, people can use weapons, such as explosives, as tools.

Ballistics is the factor that drives evolution. From throwing stones to the United States Strategic Defense Initiative Missile System.

We can either destroy, create, or do nothing. War is a marriage between construction and destruction, life and death. The duality of nature becomes clear when you look closer at the Boolean logic system, the system that married logic with mathematics using the numbers 0 and 1. This system is the basic language for all computational technologies in existence.

Leibniz took first steps toward the arithmetization of logic…and predicted the arithmetization of thought itself.

George Boole (1815-1864) developed a precise system of logic that has supported the foundations of pure mathematics and computer science ever since.
Boolean algebra reduces logic to its barest essence, [a slim set of mathematical and logical operators that we can all recognize: +, −, ×, "=", "or", "not", "and", "identity"].
It assumes as initial conditions only the existence of duality: the distinction between nothing and everything, between true and false, between on and off, between the numbers 0 and 1.
Boole's laws correspond not only to ordinary logic but to binary arithmetic… [They are] a bridge… [that] represents the common ancestry of both mathematics and logic in the genesis of the many from the one.
Boole also recognized that error and unpredictability… may be essential to our ability to think.
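The duality described above can be made concrete. Here is a minimal sketch in Python (my own illustration, not Boole's notation) showing that the same operators on {0, 1} behave both as logic and as binary arithmetic:

```python
# Boole's duality: the same operations on {0, 1} read as logic and as arithmetic.

def NOT(a):
    return 1 - a          # negation is complement within {0, 1}

def AND(a, b):
    return a * b          # conjunction is binary multiplication

def OR(a, b):
    return a + b - a * b  # disjunction, recovered from arithmetic alone

# Laws of ordinary logic check out numerically over the duality:
for a in (0, 1):
    for b in (0, 1):
        assert NOT(NOT(a)) == a                      # double negation
        assert NOT(AND(a, b)) == OR(NOT(a), NOT(b))  # De Morgan's law
```

The bridge between logic and arithmetic is exactly what the passage claims: no machinery beyond 0, 1, and elementary operations is needed.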
Kurt Gödel (1906-1978) dealt with the fundamental question asked of any formal system: Does it correspond, in whole or in part, to the real world?
Gödel proved that no formal system encompassing elementary arithmetic can be at the same time both consistent and complete.
It is possible to construct true statements that cannot be proved within the boundaries of the system itself.
This distinction between provability and truth, and a parallel distinction between knowledge and intuition, have been exhibited as evidence to support a distinction between the powers of mechanism and those of mind.
Hobbes and Leibniz both believed in the possibility of intelligent machines; it was over the issue of mechanism’s license to a soul, not to an intelligence, that the two philosophers diverged.
Hobbes's God was composed of substance; Leibniz's God was composed of mind. … According to Leibniz, relation gave rise to substance…
Leibniz on Hobbesian materialism: 'One of their sect could easily persuade himself into believing that idea of some of the ancient writers… according to which souls are born when the machine is organized to receive it, as organ-pipes are adjusted to receive the general wind.'
It is interesting how technological innovation, creation, and production get ramped up during war, when it becomes a matter of life and death. War is a salient period in nature. Nature may not have directly intended war, but it most certainly left it open as a possibility in our design.
Emergent behavior, by definition, is what's left after everything else has been explained… causality… Emergence offers a way to believe in physical causality while simultaneously maintaining the impossibility of a reductionist explanation of nature. Intelligence has to, at some point, be allowed to evolve on its own. Evolution is the emergent property of matter, the intrinsic property of chemical reactivity. We are intelligent entities. We are only our brains, encapsulated in a body, this relic made of flesh, this machine allowing us to act within the world. We are the random-seed variable in the equation of life; we are the in-between force. The growth of our bodies from zygote to adult is a force of nature we are not controlling, but nature grants us intelligence, which allows us to go against nature, go with nature, or modify nature. (You can chop off your arm before it is fully grown, stopping the inevitable force of nature.) But the question still remains: how much control do we really have? Can causality and free will exist in the same universe? If matter had to follow strict behavioral rules, the answer would be no. But we already know the answer is yes… the spin of the electron and its superposition is random and cannot be predicted.

….Life began at least once and has been exploring its alternatives ever since….. On prehistoric earth there were only cells, and cells have intelligent forms; they were intelligent systems relative to a cell. Some cells were by chance better in design than other cells, so you could say some cells were more intelligent systems, working best for what they were in comparison to all other existing systems within their environment. So if a cell's success is determined by its design, and that design makes it an intelligent system among other competing intelligent molecular systems, then intelligence can be defined by its design. And if its design is its molecular structure, then it is also its arrangement of atoms, so life can be described as an intelligent arrangement of atoms rearranging itself into higher orders of intelligent designs.

Evolution made a choice to go with intelligence in the same respect that it made a choice to turn scales into feathers. Nature makes it tricky to see the truth behind its ways… (everything that can be will be, where "can" means any and all mutations reachable from the current mutation). Cells are intelligent systems, but not in the way the human brain is. Out of chaos emerged the composition of the simplest form of the runaway molecular process of self-assembly. Life derived from the intrinsic properties of chemical reactivity. Today a new form of chaos exists in the minds and thoughts of human brains. The mind itself (the product of this level of consciousness) is the new arena for evolution. The brain created a mini universe of space to represent, to the best of its abilities, the actual universe for us to mold, absorb, and recreate. For the first time we "know," and are aware of ourselves and this place we call the universe.

………………. and we are searching for the pattern ……….

The purpose of life is to fully understand ourselves and move forward from there… the purpose of life is to evolve forever.

…it is in the larger networks that we are developing a more likely medium for the emergence of the Leviathan of artificial mind.
The cooperation between human beings and microprocessors is unprecedented, not in kind, but in suddenness and scale.
This new Leviathan signals an end to the illusion of technology as human beings exercising control over nature, rather than the other way around. Nature, in her boundless affection for complexity, has begun to claim our creation as her own [through the processes of emergent behavior and symbiosis].

In the end it will be said: nature… she was always in control.

10-15 billion years ago
The Universe is born.

10^-43 seconds later
The temperature cools to 100 million trillion trillion degrees and gravity evolves.

10^-34 seconds later
The temperature cools to 1 billion billion billion degrees and matter emerges in the form of quarks and electrons. Antimatter also appears.

10^-10 seconds later
The electroweak force splits into the electromagnetic and weak forces.

10^-5 seconds later
With the temperature at 1 trillion degrees, the quarks form protons and neutrons and the antiquarks form antiprotons. The protons and antiprotons collide, leaving mostly protons and causing the emergence of photons (light).

1 second later
Electrons and antielectrons (positrons) collide, leaving mostly electrons.

1 minute later
At a temperature of 1 billion degrees, neutrons and protons coalesce and form elements such as helium, lithium, and heavy forms of hydrogen.

300,000 years after the big bang
The average temperature is now around 3,000 degrees, and the first atoms form.

1 billion years after the big bang
Galaxies form.

3 billion years after the big bang
Matter within the galaxies forms distinct stars and solar systems.

5 to 10 billion years after the big bang, or about 5 billion years ago
The Earth is born.

3.4 billion years ago
The first biological life appears on Earth: anaerobic prokaryotes (single-celled creatures).

1.7 billion years ago
Simple DNA evolves.

700 million years ago
Multicellular plants and animals appear.

570 million years ago
The Cambrian explosion occurs: the emergence of diverse body plans, including the appearance of animals with hard body parts (shells and skeletons).

400 million years ago
Land-based plants evolve.

200 million years ago
Dinosaurs and mammals begin sharing the environment.

80 million years ago
Mammals develop more fully.

65 million years ago
Dinosaurs become extinct, leading to the rise of mammals.

50 million years ago
The anthropoid suborder of primates splits off.

30 million years ago
Advanced primates such as monkeys and apes appear.

15 million years ago
The first humanoids appear.

5 million years ago
Humanoid creatures are walking on two legs. Homo habilis is using tools, ushering in a new form of evolution: technology.

2 million years ago
Homo erectus has domesticated fire and is using language and weapons.

500,000 years ago
Homo sapiens emerge, distinguished by the ability to create technology (which involves innovation in the creation of tools, a record of tool making, and a progression in the sophistication of tools).

100,000 years ago
Homo sapiens neanderthalensis emerges.

90,000 years ago
Homo sapiens sapiens (our immediate ancestors) emerge.

40,000 years ago
The Homo sapiens sapiens subspecies is the only surviving humanoid subspecies on Earth. Technology develops as evolution by other means.

10,000 years ago
The modern era of technology begins with the agricultural revolution.

6,000 years ago
The first cities emerge in Mesopotamia.

5,500 years ago
Wheels, rafts, boats, and written language are in use.

More than 5,000 years ago
The abacus is developed in the Orient. As operated by its human user, the abacus performs arithmetic computation based on methods similar to that of a modern computer.

3000-700 b.c.
Water clocks appear during this time period in various cultures: In China, c. 3000 b.c.; in Egypt, c. 1500 b.c; and in Assyria, c. 700 b.c.

2500 b.c.
Egyptian citizens turn for advice to oracles, which are often statues with priests hidden inside.

469-322 b.c.
The basis for Western rationalistic philosophy is formed by Socrates, Plato, and Aristotle.

427 b.c.
Plato expresses ideas, in Phaedo and later works, that address the comparison of human thought and the mechanics of the machine.

c. 420 b.c.
Archytas of Tarentum, who was friends with Plato, constructs a wooden pigeon whose movements are controlled by a jet of steam or compressed air.

387 b.c.
The Academy, a group founded by Plato for the pursuit of science and philosophy, provides a fertile environment for the development of mathematical theory.

c. 200 b.c.
Chinese artisans develop elaborate automata, including an entire mechanical orchestra.

c. 200 b.c.
A more accurate water clock is developed by an Egyptian engineer.

The first true mechanical clock is built by a Chinese engineer and a Buddhist monk. It is a water-driven device with an escapement that causes the clock to tick.

Leonardo da Vinci conceives of and draws a clock with a pendulum, although an accurate pendulum clock will not be invented until the late seventeenth century.

The spinning wheel is being used in Europe.

1540, 1772
The production of more elaborate automata technology grows out of clock- and watch-making technology during the European Renaissance. Famous examples include Gianello Toriano’s mandolin- playing lady (1540) and P. Jacquet-Dortz’s child (1772).

Nicolaus Copernicus states in his De Revolutionibus that the Earth and the other planets revolve around the sun. This theory effectively changed humankind’s relationship with and view of God.

17th-18th centuries
The age of the Enlightenment ushers in a philosophical movement that restores the belief in the supremacy of human reason, knowledge, and freedom. With its roots in ancient Greek philosophy and the European Renaissance, the Enlightenment is the first systematic reconsideration of the nature of human thought and knowledge since the Platonists, and inspires similar developments in science and theology.

In addition to formulating the theory of optical refraction and developing the principles of modern analytic geometry, René Descartes pushes rational skepticism to its limits in his most comprehensive work, Discours de la Méthode. He concludes, “I think, therefore, I am.”

Blaise Pascal invents the world’s first automatic calculating machine. Called the Pascaline, it can add and subtract.

Isaac Newton establishes his three laws of motion and the law of universal gravitation in his Philosophiae Naturalis Mathematica, also known as Principia.

The Leibniz Computer is perfected by Gottfried Wilhelm Leibniz, who was also an inventor of calculus. This machine multiplies by performing repetitive additions, an algorithm that is still used in computers today.
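The algorithm named in the entry above, multiplication as repetitive addition, fits in a few lines. A sketch (my own illustration, not a model of Leibniz's mechanism):

```python
def multiply(a: int, b: int) -> int:
    """Multiply by repeated addition, as Leibniz's machine did mechanically."""
    total = 0
    for _ in range(b):  # add a to the running total, b times
        total += a
    return total

assert multiply(7, 6) == 42
```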

An English silk-thread mill employing three hundred workers, mostly women and children, appears. It is considered by many to be the first factory in the modern sense.

In Gulliver’s Travels, Jonathan Swift describes a machine that will automatically write books.

John Kay patents his New Engine for Opening and Dressing Wool. Later known as the flying shuttle, this invention paves the way for much faster weaving.

In Philadelphia, Benjamin Franklin erects lightning rods after having discovered, through his famous kite experiment in 1752, that lightning is a form of electricity.

c. 1760
At the beginning of the Industrial Revolution, life expectancy is about thirty-seven years in both North America and northwestern Europe.

The spinning jenny, which spins eight threads at the same time, is invented by James Hargreaves.

Richard Arkwright patents a hydraulic spinning machine that is too large and expensive to use in family dwellings. Known as the founder of the modern factory system, he builds a factory for his machine in 1781, thus paving the way for many of the economic and social changes that will characterize the Industrial Revolution.

Setting the stage for the emergence of twentieth- century rationalism, Immanuel Kant publishes his Critique of Pure Reason, which expresses the philosophy of the Enlightenment while de-emphasizing the role of metaphysics.

All aspects of the production of cloth are now automated.

Joseph-Marie Jacquard devises a method for automated weaving that is a precursor to early computer technology. The looms are directed by instructions on a series of punched cards.

The Luddite movement is formed in Nottingham by artisans and laborers concerned about the loss of jobs due to automation.

The British Astronomical Society awards its first gold medal to Charles Babbage for his paper “Observations on the Application of Machinery to the Computation of Mathematical Tables.”

Charles Babbage develops the Difference Engine, although he eventually abandons this technically complex and expensive project to concentrate on developing a general-purpose computer.

George Stephenson’s “Locomotion No. 1,” the first steam engine to carry passengers and freight on a regular basis, makes its first trip.

An early typewriter is invented by William Austin Burt.

The principles of the Analytical Engine are developed by Charles Babbage. It is the world’s first computer (although it never worked), and can be programmed to solve a wide array of computational and logical problems.

A more practical version of the telegraph is patented by Samuel Finley Breese Morse. It sends letters in codes consisting of dots and dashes, a system still in common use more than a century later.

A new process for making photographs, known as daguerreotypes, is presented by Louis-Jacques Daguerre of France.

The first fuel cell is developed by William Robert Grove of Wales.

Ada Lovelace, who is considered to be the world’s first computer programmer and was Lord Byron’s only legitimate child, publishes her own notes and a translation of L. P. Menabrea’s paper on Babbage’s Analytical Engine. She speculates on the ability of computers to emulate human intelligence.

The lock-stitch sewing machine is patented by Spenser, Massachusetts, resident Elias Howe.

Alexander Bain greatly improves the speed of telegraph transmission by using punched paper tape to send messages.

George Boole publishes his early ideas on symbolic logic that he will later develop into his theory of binary logic and arithmetic. His theories still form the basis of modern computation.

Paris and London are connected by telegraph.

Charles Darwin explains his principle of natural selection and its influence on the evolution of various species in his work Origin of Species.

There are now telegraph lines connecting San Francisco and New York.

The first commercially practical generator that produces alternating current is invented by Zénobe Théophile Gramme.

Thomas Alva Edison sells the stock ticker that he invented to Wall Street for $40,000.

On a per capita basis and in constant 1958 dollars, the GNP is $530. Twelve million Americans, or 31 percent of the population, have jobs, and only 2 percent of adults have high-school diplomas.

Upon his death, Charles Babbage leaves more than four hundred square feet of drawings for his Analytical Engine.

Alexander Graham Bell is granted U.S. patent number 174,465 for the telephone. It is the most lucrative patent granted at that time.

William Thomson, later known as Lord Kelvin, demonstrates that it is possible for machines to be programmed to solve a great variety of mathematical problems.

The first incandescent light bulb that burns for a substantial length of time is invented by Thomas Alva Edison.

Thomas Alva Edison designs electric lighting for New York City’s Pearl Street station on lower Broadway.

The fountain pen is patented by Lewis E. Waterman.

Boston and New York are connected by telephone.

William S. Burroughs patents the world’s first dependable key-driven adding machine. This calculator is modified four years later to include subtraction and printing, and it becomes widely used.

Heinrich Hertz transmits what are now known as radio waves.

Building upon ideas from Jacquard’s loom and Babbage’s Analytical Engine, Herman Hollerith patents an electromechanical information machine that uses punched cards. It wins the 1890 U.S. Census competition, thus introducing the use of electricity in a major data-processing project.

Herman Hollerith founds the Tabulating Machine Company. This company eventually will become IBM.

Because of access to better vacuum pumps than previously available, Joseph John Thomson discovers the electron, the first known particle smaller than an atom.

Alexander Popov, a physicist in Russia, uses an antenna to transmit radio waves. Guglielmo Marconi of Italy receives the first patent ever granted for radio and helps organize a company to market his system.

Sound is recorded magnetically on wire and on a thin metal strip.

Herman Hollerith introduces the automatic card feed into his information machine to improve the processing of the 1900 census data.

The telegraph now connects the entire civilized world. There are more than 1.4 million telephones, 8,000 registered automobiles, and 24 million electric light bulbs in the United States, with the latter making good Edison’s promise of “electric bulbs so cheap that only the rich will be able to afford candles.” In addition, the Gramophone Company is advertising a choice of 5,000 recordings.

More than one third of all American workers are involved in the production of food.

The first electric typewriter, the Blickensderfer Electric, is made.

The Interpretation of Dreams is published by Sigmund Freud. This and other works by Freud help to illuminate the workings of the mind.

Millar Hutchinson, of New York, invents the first electric hearing aid.

The directional radio antenna is developed by Guglielmo Marconi.

Orville Wright’s first hour-long airplane flight takes place.

Principia Mathematica, a seminal work on the foundations of mathematics, is published by Bertrand Russell and Alfred North Whitehead. This three- volume publication presents a new methodology for all mathematics.

After acquiring several other companies, Herman Hollerith’s Tabulating Machine Company changes its name to Computing-Tabulating-Recording Company (CTR).

Thomas J. Watson in San Francisco and Alexander Graham Bell in New York participate in the first North American transcontinental telephone call.

The term robot is coined in 1917 by Czech dramatist Karel Capek. In his popular science fiction drama R.U.R. (Rossum’s Universal Robots), he describes intelligent machines that, although originally created as servants for humans, end up taking over the world and destroying all mankind.

Ludwig Wittgenstein publishes Tractatus Logico-Philosophicus, which is arguably one of the most influential philosophical works of the twentieth century. Wittgenstein is considered to be the first logical positivist.

Originally Hollerith’s Tabulating Machine Company, the Computing-Tabulating-Recording Company (CTR) is renamed International Business Machines (IBM) by Thomas J. Watson, the new chief executive officer. IBM will lead the modern computer industry and become one of the largest industrial corporations in the world.

The foundations of quantum mechanics are conceived by Niels Bohr and Werner Heisenberg.

The uncertainty principle, which says that electrons have no precise location but rather probability clouds of possible locations, is presented by Werner Heisenberg. Five years later he will win a Nobel Prize for his discovery of quantum mechanics.

The minimax theorem is introduced by John von Neumann. This theorem will be widely used in future game-playing programs.

The world’s first all-electronic television is presented this year by Philo T. Farnsworth, and a color television system is patented by Vladimir Zworkin.

In the United States, 60 percent of all households have radios, with the number of personally owned radios now reaching more than 18 million.

The incompleteness theorem, which is considered by many to be the most important theorem in all mathematics, is presented by Kurt Gödel.

The electron microscope is invented by Ernst August Friedrich Ruska and, independently, by Rheinhold Ruedenberg.

The prototype for the first heart-lung machine is invented.

Grote Reber, of Wheaton, Illinois, builds the first intentional radio telescope, which is a dish 9.4 meters (31 feet) in diameter.

Alan Turing introduces the Turing machine, a theoretical model of a computer, in his paper “On Computable Numbers.” His ideas build upon the work of Bertrand Russell and Charles Babbage.

Alonzo Church and Alan Turing independently develop the Church-Turing thesis. This thesis states that all problems that a human being can solve can be reduced to a set of algorithms, supporting the idea that machine intelligence and human intelligence are essentially equivalent.

The first ballpoint pen is patented by Lazlo Biró.

Regularly scheduled commercial flights begin crossing the Atlantic Ocean.

ABC, the first electronic (albeit nonprogrammable) computer, is built by John V. Atanasoff and Clifford Berry.

The world’s first operational computer, known as Robinson, is created by Ultra, the ten- thousand- person British computer war effort. Using electromechanical relays, Robinson successfully decodes messages from Enigma, the Nazis’ first-generation enciphering machine.

The world’s first fully programmable digital computer, the Z-3, is developed by Konrad Zuse, of Germany. Arnold Fast, a blind mathematician who is hired to program the Z-3, is the world’s first programmer of an operational programmable computer.

Warren McCulloch and Walter Pitts explore neural-network architectures for intelligence in their work “Logical Calculus of the Ideas Immanent in Nervous Activity.”

Continuing their war effort, the Ultra computer team of Britain builds Colossus, which contributes to the Allied victory in World War II by being able to decipher even more complex German codes. It uses electronic tubes that are one hundred to one thousand times faster than the relays used by Robinson.

Howard Aiken completes the Mark I. Using punched paper tape for programming and vacuum tubes to calculate problems, it is the first programmable computer built by an American.

John von Neumann, a professor at the Institute for Advanced Study in Princeton, New Jersey, publishes the first modern paper describing the stored-program concept.

The world’s first fully electronic, general-purpose (programmable) digital computer is developed for the army by John Presper Eckert and John W. Mauchley. Named ENIAC, it is almost one thousand times faster than the Mark I.

Television takes off much more rapidly than did the radio in the 1920s. In 1946, the percentage of American homes having television sets is 0.02 percent. It will jump to 72 percent in 1956, and to more than 90 percent by 1983.

The transistor is invented by William Bradford Shockley, Walter Hauser Brattain, and John Bardeen. This tiny device functions like a vacuum tube but is able to switch currents on and off at substantially higher speeds. The transistor revolutionizes microelectronics, contributing to lower costs of computers and leading to the development of mainframe and minicomputers.

Cybernetics, a seminal book on information theory, is published by Norbert Wiener. He also coins the word Cybernetics to mean “the science of control and communication in the animal and the machine.”

EDSAC, the world’s first stored-program computer, is built by Maurice Wilkes, whose work was influenced by Eckert and Mauchley. BINAC, developed by Eckert and Mauchley’s new U.S. company, is presented a short time later.

George Orwell portrays a chilling world in which computers are used by large bureaucracies to monitor and enslave the population in his book 1984.

Eckert and Mauchley develop UNIVAC, the first commercially marketed computer. It is used to compile the results of the U.S. census, marking the first time this census is handled by a programmable computer.

In his paper “Computing Machinery and Intelligence,” Alan Turing presents the Turing Test, a means for determining whether a machine is intelligent.

Commercial color television is first broadcast in the United States, and transcontinental black-and-white television is available within the next year.

Claude Elwood Shannon writes “Programming a Computer for Playing Chess,” published in Philosophical Magazine.

Eckert and Mauchley build EDVAC, which is the first computer to use the stored-program concept. The work takes place at the Moore School at the University of Pennsylvania.

Paris is the host to a Cybernetics Congress.

UNIVAC, used by the Columbia Broadcasting System (CBS) television network, successfully predicts the election of Dwight D. Eisenhower as president of the United States.

Pocket-sized transistor radios are introduced.

Nathaniel Rochester designs the 701, IBM’s first production-line electronic digital computer. It is marketed for scientific use.

The chemical structure of the DNA molecule is discovered by James D. Watson and Francis H. C. Crick.

Philosophical Investigations by Ludwig Wittgenstein and Waiting for Godot, a play by Samuel Beckett, are published. Both documents are considered of major importance to modern existentialism.

Marvin Minsky and John McCarthy get summer jobs at Bell Laboratories.

William Shockley’s Semiconductor Laboratory is founded, thereby starting Silicon Valley.

The Remington Rand Corporation and Sperry Gyroscope join forces and become the Sperry-Rand Corporation. For a time, it presents serious competition to IBM.

IBM introduces its first transistor calculator. It uses 2,200 transistors instead of the 1,200 vacuum tubes that would otherwise be required for equivalent computing power.

A U.S. company develops the first design for a robotlike machine to be used in industry.

IPL-II, the first artificial intelligence language, is created by Allen Newell, J. C. Shaw, and Herbert Simon.

The new space program and the U.S. military recognize the importance of having computers with enough power to launch rockets to the moon and missiles through the stratosphere. Both organizations supply major funding for research.

The Logic Theorist, which uses recursive search techniques to solve mathematical problems, is developed by Allen Newell, J. C. Shaw, and Herbert Simon.

John Backus and a team at IBM invent FORTRAN, the first scientific computer-programming language.

Stanislaw Ulam develops MANIAC I, the first computer program to beat a human being in a chess game.

The first commercial watch to run on electric batteries is presented by the Lip company of France.

The term Artificial Intelligence is coined at a computer conference at Dartmouth College.

Kenneth H. Olsen founds Digital Equipment Corporation.

The General Problem Solver, which uses recursive search to solve problems, is developed by Allen Newell, J. C. Shaw, and Herbert Simon.

Noam Chomsky writes Syntactic Structures, in which he seriously considers the computation required for natural-language understanding. This is the first of the many important works that will earn him the title Father of Modern Linguistics.

An integrated circuit is created by Texas Instruments’ Jack St. Clair Kilby.

The Artificial Intelligence Laboratory at the Massachusetts Institute of Technology is founded by John McCarthy and Marvin Minsky.

Allen Newell and Herbert Simon make the prediction that a digital computer will be the world’s chess champion within ten years.

LISP, an early AI language, is developed by John McCarthy.

The Defense Advanced Research Projects Agency, which will fund important computer-science research for years in the future, is established.

Seymour Cray builds the Control Data Corporation 1604, the first fully transistorized supercomputer.

Jack Kilby and Robert Noyce each develop the computer chip independently. The computer chip leads to the development of much cheaper and smaller computers.

Arthur Samuel completes his study in machine learning. The project, a checkers-playing program, performs as well as some of the best players of the time.

Electronic document preparation increases the consumption of paper in the United States. This year, the nation will consume 7 million tons of paper. In 1986, 22 million tons will be used. American businesses alone will use 850 billion pages in 1981, 2.5 trillion pages in 1986, and 4 trillion in 1990.

COBOL, a computer language designed for business use, is developed by Grace Murray Hopper, who was also one of the first programmers of the Mark I.

Xerox introduces the first commercial copier.

Theodore Harold Maiman develops the first laser. It uses a ruby cylinder.

The recently established Defense Department’s Advanced Research Projects Agency substantially increases its funding for computer research.

There are now about six thousand computers in operation in the United States.

Neural-net machines are quite simple and incorporate a small number of neurons organized in only one or two layers. These models are shown to be limited in their capabilities.

The first time-sharing computer is developed at MIT.

President John F. Kennedy provides the support for space project Apollo and inspiration for important research in computer science when he addresses a joint session of Congress, saying, “I believe we should go to the moon.”

The world’s first industrial robots are marketed by a U.S. company.

Frank Rosenblatt defines the Perceptron in his Principles of Neurodynamics. Rosenblatt first introduced the Perceptron, a simple processing element for neural networks, at a conference in 1959.

The Artificial Intelligence Laboratory at Stanford University is founded by John McCarthy.

The influential Steps Toward Artificial Intelligence by Marvin Minsky is published.

Digital Equipment Corporation announces the PDP-8, which is the first successful minicomputer.

IBM introduces its 360 series, thereby further strengthening its leadership in the computer industry.

Thomas E. Kurtz and John G. Kemeny of Dartmouth College invent BASIC (Beginner’s All-purpose Symbolic Instruction Code).

Daniel Bobrow completes his doctoral work on Student, a natural-language program that can solve high-school-level word problems in algebra.

Gordon Moore’s prediction, made this year, says integrated circuits will double in complexity each year. This will become known as Moore’s Law and prove true (with later revisions) for decades to come.
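The arithmetic behind Moore’s prediction is simple exponential doubling. A minimal sketch, using hypothetical starting figures for illustration (the 64-transistor baseline is an assumption, not a historical count):

```python
def transistors(start_count, years, doubling_period=1):
    """Project a transistor count after `years`, given a doubling period
    in years (1 for Moore's original 1965 rate, 2 for his 1975 revision)."""
    return start_count * 2 ** (years / doubling_period)

# From a hypothetical 64-transistor chip, annual doubling over a decade
# yields 64 * 2**10 transistors; the revised two-year rate yields 64 * 2**5.
print(transistors(64, 10))      # → 65536.0
print(transistors(64, 10, 2))   # → 2048.0
```

The later revision to a twenty-four-month doubling period (noted in the 1975 entry below) changes only the `doubling_period` argument, not the exponential form of the law.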

Marshall McLuhan, via his Understanding Media, foresees the potential for electronic media, especially television, to create a “global village” in which “the medium is the message.”

The Robotics Institute at Carnegie Mellon University, which will become a leading research center for AI, is founded by Raj Reddy.

Hubert Dreyfus presents a set of philosophical arguments against the possibility of artificial intelligence in a RAND corporate memo entitled “Alchemy and Artificial Intelligence.”

Herbert Simon predicts that by 1985 “machines will be capable of doing any work a man can do.”

The Amateur Computer Society, possibly the first personal computer club, is founded by Stephen B. Gray. The Amateur Computer Society Newsletter is one of the first magazines about computers.

The first internal pacemaker is developed by Medtronics. It uses integrated circuits.

Gordon Moore and Robert Noyce found Intel (Integrated Electronics) Corporation.

The idea of a computer that can see, speak, hear, and think sparks imaginations when HAL is presented in the film 2001: A Space Odyssey, by Arthur C. Clarke and Stanley Kubrick.

Marvin Minsky and Seymour Papert present the limitation of single-layer neural nets in their book Perceptrons. The book’s pivotal theorem shows that a Perceptron is unable to determine if a line drawing is fully connected. The book essentially halts funding for neural-net research.
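The book’s pivotal theorem concerns connectedness, but the simplest standard illustration of the same single-layer limitation is the XOR function: no single threshold unit can compute it, because its four input points are not linearly separable. The brute-force sketch below is an illustration of this point, not an example from the book:

```python
import itertools

def predicts_xor(w1, w2, b):
    """Check whether one threshold unit with these weights computes XOR."""
    for x1, x2 in itertools.product([0, 1], repeat=2):
        out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        if out != (x1 ^ x2):
            return False
    return True

# Exhaustively search a grid of weights and biases: none succeeds,
# since XOR is not linearly separable.
grid = [x / 2 for x in range(-8, 9)]  # -4.0 .. 4.0 in steps of 0.5
found = any(predicts_xor(w1, w2, b)
            for w1 in grid for w2 in grid for b in grid)
print(found)  # → False
```

Adding a second layer of units removes the limitation, which is why the multilayer networks of the 1980s (noted in a later entry) revived the field.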

The GNP, on a per capita basis and in constant 1958 dollars, is $3,500, or more than six times as much as a century before.

The floppy disc is introduced for storing data in computers.

c. 1970
Researchers at the Xerox Palo Alto Research Center (PARC) develop the first personal computer, called Alto. PARC’s Alto pioneers the use of bit-mapped graphics, windows, icons, and mouse pointing devices.

Terry Winograd completes his landmark thesis on SHRDLU, a natural-language system that exhibits diverse intelligent behavior in the small world of children’s blocks. SHRDLU is criticized, however, for its lack of generality.

The Intel 4004, the first microprocessor, is introduced by Intel.

The first pocket calculator is introduced. It can add, subtract, multiply, and divide.

Continuing his criticism of the capabilities of AI, Hubert Dreyfus publishes What Computers Can’t Do, in which he argues that symbol manipulation cannot be the basis of human intelligence.

Stanley H. Cohen and Herbert W. Boyer show that DNA strands can be cut, joined, and then reproduced by inserting them into the bacterium Escherichia coli. This work creates the foundation for genetic engineering.

Creative Computing starts publication. It is the first magazine for home computer hobbyists.

The 8-bit 8080, which is the first general-purpose microprocessor, is announced by Intel.

Sales of microcomputers in the United States reach more than five thousand, and the first personal computer, the Altair 8800, is introduced. It has 256 bytes of memory.

BYTE, the first widely distributed computer magazine, is published.

Gordon Moore revises his observation on the doubling rate of transistors on an integrated circuit from twelve months to twenty-four months.

Kurzweil Computer Products introduces the Kurzweil Reading Machine (KRM), the first print-to-speech reading machine for the blind. Based on the first omni-font (any font) optical character recognition (OCR) technology, the KRM scans and reads aloud any printed materials (books, magazines, typed documents).

Stephen G. Wozniak and Steven P. Jobs found Apple Computer Corporation.

The concept of true-to-life robots with convincing human emotions is imaginatively portrayed in the film Star Wars.

For the first time, a telephone company conducts large-scale experiments with fiber optics in a telephone system.

The Apple II, the first personal computer to be sold in assembled form and the first with color graphics capability, is introduced and successfully marketed.

Speak & Spell, a computerized learning aid for young children, is introduced by Texas Instruments. This is the first product that electronically duplicates the human vocal tract on a chip.

In a landmark study by nine researchers published in the Journal of the American Medical Association, the performance of the computer program MYCIN is compared with that of doctors in diagnosing ten test cases of meningitis. MYCIN does at least as well as the medical experts. The potential of expert systems in medicine becomes widely recognized.

Dan Bricklin and Bob Frankston establish the personal computer as a serious business tool when they develop VisiCalc, the first electronic spreadsheet.

AI industry revenue is a few million dollars this year.

As neuron models are becoming potentially more sophisticated, the neural network paradigm begins to make a comeback, and networks with multiple layers are commonly used.

Xerox introduces the Star Computer, thus launching the concept of Desktop Publishing. Apple’s LaserWriter, available in 1985, will further increase the viability of this inexpensive and efficient way for writers and artists to create their own finished documents.

IBM introduces its Personal Computer (PC).

The prototype of the Bubble Jet printer is presented by Canon.

Compact disc players are marketed for the first time.

Mitch Kapor presents Lotus 1-2-3, an enormously popular spreadsheet program.

Fax machines are fast becoming a necessity in the business world.

The Musical Instrument Digital Interface (MIDI) is presented in Los Angeles at the first North American Music Manufacturers show.

Six million personal computers are sold in the United States.

The Apple Macintosh introduces the “desktop metaphor,” pioneered at Xerox, including bit-mapped graphics, icons, and the mouse.

William Gibson uses the term cyberspace in his book Neuromancer.

The Kurzweil 250 (K250) synthesizer, considered to be the first electronic instrument to successfully emulate the sounds of acoustic instruments, is introduced to the market.

Marvin Minsky publishes The Society of Mind, in which he presents a theory of the mind in which intelligence is seen as the result of the proper organization of a hierarchy of minds with simple mechanisms at the lowest level of the hierarchy.

MIT’s Media Laboratory is founded by Jerome Wiesner and Nicholas Negroponte. The lab is dedicated to researching possible applications and interactions of computer science, sociology, and artificial intelligence in the context of media technology.

There are 116 million jobs in the United States, compared to 12 million in 1870. In the same period, the number of those employed has grown from 31 percent to 48 percent, and the per capita GNP in constant dollars has increased by 600 percent. These trends show no signs of abating.

Electronic keyboards account for 55.2 percent of the American musical keyboard market, up from 9.5 percent in 1980.

Life expectancy is about 74 years in the United States. Only 3 percent of the American workforce is involved in the production of food. Fully 76 percent of American adults have high-school diplomas, and 7.3 million U.S. students are enrolled in college.

NYSE stocks have their greatest single-day loss due, in part, to computerized trading.

Current speech systems can provide any one of the following: a large vocabulary, continuous speech recognition, or speaker independence.

Robotic-vision systems are now a $300 million industry and will grow to $800 million by 1990.

Computer memory today costs only one hundred millionth of what it did in 1950.

Marvin Minsky and Seymour Papert publish a revised edition of Perceptrons in which they discuss recent developments in neural network machinery for intelligence.

In the United States, 4,700,000 microcomputers, 120,000 minicomputers, and 11,500 mainframes are sold this year.

W. Daniel Hillis’s Connection Machine is capable of 65,536 computations at the same time.

Notebook computers are replacing the bigger laptops in popularity.

Intel introduces the 16-megahertz (MHz) 80386SX, 2.5 MIPS microprocessor.

Nautilus, the first CD-ROM magazine, is published.

The development of HyperText Markup Language by researcher Tim Berners-Lee and its release by CERN, the high-energy physics laboratory in Geneva, Switzerland, leads to the conception of the World Wide Web.

Cell phones and e-mail are increasing in popularity as business and personal communication tools.

The first double-speed CD-ROM drive becomes available from NEC.

The first personal digital assistant (PDA), a hand-held computer, is introduced at the Consumer Electronics Show in Chicago. The developer is Apple Computer.

The Pentium 32-bit microprocessor is launched by Intel. This chip has 3.1 million transistors.

The World Wide Web emerges.

America Online now has more than 1 million subscribers.

Scanners and CD-ROMs are becoming widely used.

Digital Equipment Corporation introduces a 300-MHz version of the Alpha AXP processor that executes 1 billion instructions per second.

Compaq Computer and NEC Computer Systems ship hand-held computers running Windows CE.

NEC Electronics ships the R4101 processor for personal digital assistants. It includes a touch-screen interface.

Deep Blue defeats Garry Kasparov, the world chess champion, in a regulation tournament.

Dragon Systems introduces Naturally Speaking, the first continuous-speech dictation software product.

Video phones are being used in business settings.

Face-recognition systems are beginning to be used in payroll check-cashing machines.

The Dictation Division of Lernout & Hauspie Speech Products (formerly Kurzweil Applied Intelligence) introduces Voice Xpress Plus, the first continuous-speech-recognition program with the ability to understand natural-language commands.

Routine business transactions over the phone are beginning to be conducted between a human customer and an automated system that engages in a verbal dialogue with the customer (e.g., United Airlines reservations).

Investment funds are emerging that use evolutionary algorithms and neural nets to make investment decisions (e.g., Advanced Investment Technologies).

The World Wide Web is ubiquitous. It is routine for high-school students and local grocery stores to have web sites.

Automated personalities, which appear as animated faces that speak with realistic mouth movements and facial expressions, are working in laboratories. These personalities respond to the spoken statements and facial expressions of their human users. They are being developed to be used in future user interfaces for products and services, as personalized research and business assistants, and to conduct transactions.

Microvision’s Virtual Retina Display (VRD) projects images directly onto the user’s retinas. Although expensive, consumer versions are projected for 1999.

“Bluetooth” technology is being developed for “body” local area networks (LANs) and for wireless communication between personal computers and associated peripherals. Wireless communication is being developed for high-bandwidth connection to the Web.

Some many millenniums hence . . .
Intelligent beings consider the fate of the Universe.