ZHdK, BA Interaction Design | Bits & Atoms III
Geopositioning of Specific Data in 3D
Interacting in 3D
Reading Data from GeoTIFFs
IR Tracking System and P5JS Framework
The Math behind the Geometry of the Earth
The Earth is abstracted as a sphere and segmented into equirectangular regions. These regions are fractions of degrees that indicate a precise location. The degrees are expressed as latitude values (180° moving between the poles) and longitude values (360° moving along the equator).
To represent this in a graphic environment, we need to do a few things. One is translating lat/lon values (geo coordinates) into Cartesian coordinates.
You might remember we did the same for 2D Vectors:
x = map(lon, -180,180,0, widthOfSomeDisplay);
y = map(lat, 90,-90,0, heightOfSomeDisplay);
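As a quick check, the 2D mapping above can be run outside p5 by re-implementing map() (built into p5/Processing); the canvas size and the sample coordinates are assumptions for illustration:

```javascript
// Self-contained sketch of the lat/lon-to-screen mapping above.
// map() is re-implemented here; in p5/Processing it is built in.
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * (value - inMin) / (inMax - inMin);
}

const displayWidth = 800;  // assumed canvas size for illustration
const displayHeight = 400;

// Zurich, roughly lat 47.4° N, lon 8.5° E
const x = map(8.5, -180, 180, 0, displayWidth);
const y = map(47.4, 90, -90, 0, displayHeight);
console.log(x, y); // x ≈ 418.89, y ≈ 94.67
```

Note the inverted range for latitude: lat 90° (north) maps to y = 0, the top of the screen.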
Translation from geographical coordinates (lat, lon) to a 3D Cartesian vector (x, y, z) for spatial representation:
x = R * cos(phi) * cos(theta);
y = R * cos(phi) * sin(theta);
z = R * sin(phi);
Where "R" is the radius of the Earth;
"phi" is the rotation angle expressed by latitude in radians;
"theta" is the rotation angle expressed by longitude in radians.
*** These formulas calculate the x location counter-clockwise. As we use a shifted scale (-180 to 180 instead of 0 to 360), we need to invert this value when representing it on screen.
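The three formulas above can be wrapped in a small helper function. This is a self-contained sketch (radians() is re-implemented so it runs outside p5/Processing; the radius value is an assumption):

```javascript
// Spherical-to-Cartesian conversion from the formulas above.
// radians() is re-implemented here; in p5/Processing it is built in.
function radians(deg) {
  return deg * Math.PI / 180;
}

function geoToCartesian(lat, lon, R) {
  const phi = radians(lat);    // rotation angle from latitude
  const theta = radians(lon);  // rotation angle from longitude
  const x = R * Math.cos(phi) * Math.cos(theta);
  const y = R * Math.cos(phi) * Math.sin(theta);
  const z = R * Math.sin(phi);
  return [x, y, z];
}

// Sanity checks: (0°, 0°) sits on the equator at x = R,
// and the north pole (90°, 0°) sits on the z axis.
console.log(geoToCartesian(0, 0, 100));  // → [100, 0, 0]
console.log(geoToCartesian(90, 0, 100)); // ≈ [0, 0, 100]
```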
Exercise
Draw a sphere that represents the Earth.
Load an image of the Earth on top of it as a texture.
Draw a single or few GPS locations.
Find the code examples on GitHub.
Challenges
* Interacting with 3D using 2D sensing
* Navigation in 3D
Interacting with 3D vectors through a 2D screen is a special case: common in video games, but perhaps less common in interactive applications.
Screen Position
To make this possible we calculate screen coordinates "x" and "y" according to the current 2D projection of the 3D scene onto our viewport or canvas.
Luckily this is integrated in Processing and other frameworks as well.
In Processing we can use "screenX()", "screenY()" and "screenZ()".
For instance, if we want to calculate the 2D vector of the 3D vector (40, 200, -100), we use the functions canvas.screenX(40, 200, -100) and canvas.screenY(40, 200, -100) respectively.
Each of them provides us with the 2D coordinates on screen of our target 3D vector.
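Under the hood, screenX()/screenY() multiply the 3D point by the current model-view and projection matrices. As a rough intuition only (not Processing's actual implementation), a stripped-down pinhole projection looks like this; camera distance, focal length and canvas size are assumptions:

```javascript
// Simplified pinhole projection, assuming a camera on the +z axis at
// distance camZ looking at the origin. This is NOT the real screenX()/
// screenY() implementation, which uses the full matrix stack.
function project(x, y, z, camZ, focal, cx, cy) {
  const scale = focal / (camZ - z); // points further away shrink
  return [cx + x * scale, cy - y * scale];
}

// A point 100 units left of the origin, seen from camZ = 500 with
// focal length 500, on an 800x600 canvas:
const [sx, sy] = project(-100, 0, 0, 500, 500, 400, 300);
console.log(sx, sy); // → 300 300
```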
However, the math behind it is fairly comprehensible. If you are keen, read: the issue on adding screenX to p5.js and the actual screenPosition implementation for p5.js.
Navigation
For this we simplify our lives and use the PeasyCam library, which allows us to do camera rotations, dragging, panning, zooming, etc.
Exercise
The same as the past exercise (Sphere, Earth Texture, Point of Interest).
Integrate the PeasyCam library and framework.
Interact with a Point of Interest.
Find the code examples on GitHub.
Exercise
Similar to what we did in 2D:
Load the List of Cities from the CSV table "future_cities.csv"
Visualize the cities on the 3D viewer
Find the code examples on GitHub.
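In p5 the table would be loaded with loadTable(); as a standalone sketch, parsing a CSV by hand looks like this. The column names "city", "lat" and "lng" are assumptions — check the actual header of future_cities.csv:

```javascript
// Hedged sketch of turning CSV rows into city objects. The sample data
// and column names are assumptions for illustration; in p5/Processing
// you would use loadTable("future_cities.csv", "header") instead.
const csv = `city,lat,lng
Zurich,47.37,8.54
Nairobi,-1.29,36.82`;

const [header, ...rows] = csv.trim().split("\n");
const cols = header.split(",");
const cities = rows.map((line) => {
  const values = line.split(",");
  const row = {};
  cols.forEach((c, i) => (row[c] = values[i]));
  // Keep name as a string, convert coordinates to numbers.
  return { name: row.city, lat: Number(row.lat), lon: Number(row.lng) };
});

console.log(cities.length);  // → 2
console.log(cities[0].name); // → Zurich
```

Each city object can then be converted to a 3D vector with the formulas from the earlier slides and drawn on the sphere.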
A GeoTIFF is similar to an image but encodes further metadata inside each pixel.
How to translate this to 3D?
Exercise
Load a GeoTIFF image containing simplified data as a PImage.
Get the xy location of each pixel and the brightness values
Store these data in a collection of objects
Translate the data from xy to geocoordinates and to 3D Vectors
Visualize something with these vectors, i.e. draw circles around the Earth
Find the code examples on GitHub.
Explanations
XY location of each pixel
While iterating linearly through a collection of pixels, we only know each pixel's index. Knowing the width of the image, we can calculate its x and y coordinates with the following formulas:
x = index%image.width;
y = (index-x)/image.width;
% -> stands for modulo operation
To translate the Cartesian coordinates (x, y) into geo coordinates (lat, lon):
-> lon = map(x, 0,image.width,-180,180)
-> lat = map(y, 0,image.height,90,-90)
To convert the geo coordinates into 3D vectors, see slide 5.
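The whole pixel pipeline (index → x, y → lat, lon → 3D vector) can be sketched end to end. map() and radians() are re-implemented so the snippet runs outside p5; the image dimensions and sphere radius are assumptions:

```javascript
// End-to-end sketch of the pipeline above: pixel index -> (x, y)
// -> (lat, lon) -> 3D vector on a sphere of radius R.
function map(v, a, b, c, d) { return c + (d - c) * (v - a) / (b - a); }
function radians(deg) { return deg * Math.PI / 180; }

const imgWidth = 360, imgHeight = 180; // assumed image dimensions
const R = 100;                         // assumed sphere radius

function pixelToVector(index) {
  const x = index % imgWidth;          // column from index
  const y = (index - x) / imgWidth;    // row from index and width
  const lon = map(x, 0, imgWidth, -180, 180);
  const lat = map(y, 0, imgHeight, 90, -90);
  const phi = radians(lat), theta = radians(lon);
  return [
    R * Math.cos(phi) * Math.cos(theta),
    R * Math.cos(phi) * Math.sin(theta),
    R * Math.sin(phi),
  ];
}

// The very first pixel (index 0) maps to lat 90°, lon -180°:
// the north pole, on the z axis of the sphere.
console.log(pixelToVector(0)); // ≈ [0, 0, 100]
```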
Find the code examples on GitHub.
Almost done!
IR Tracker on Github
IR Tracking, written by Florian Bruggiser, allows tracking IR LEDs with a webcam and a special IR filter. This system is mounted on top of the touch table.
Table Framework in P5 JS
Written in p5.js and built with a Node.js web framework, it integrates a server-client architecture that allows receiving OSC messages directly from the IR Tracker server. The client already parses the messages coming from the IR Tracker. It is meant to be the baseline for your prototypes, but you can also build your prototypes directly in Processing.