Scanner, 2011

In 2011 it was difficult to turn an object into an .stl file (it's still kind of hard). Out of a desire to scan and model interior spaces came a 360-degree, 3D laser scanner built for under $30 and written in Arduino and Processing. The scanner produced intelligible readings of spaces as well as unintelligible questions around our relationship to spaces and the role of imagination in perceiving spaces, both real and computed.  


This scanner “sees” depth the way the human eye does: your eyes look at the same object from slightly offset angles (since they sit inches apart on your face), and your brain reconciles these two views into a depth field. This scanner similarly triangulates a point’s z-position by comparing two lines of sight, one from the camera and one from a laser level. Any point in space lying in a z-plane other than that of the back reference wall will produce a visible skew in the corresponding x-value of the laser level’s light: shine the level on a blank wall and you’d see a straight vertical line, but step in front of that line and it would appear to follow the contours of your body. The scanner reads these deviations in the x-positions of points along the laser line to calculate each point’s z-position.
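For a sense of the geometry, here is a minimal Processing-style sketch of that intersection of sightlines. The baseline, focal length, and laser angle are placeholder values for illustration, not measurements from this scanner.

// Hedged sketch: intersect the camera's ray through a pixel with the
// laser's vertical plane to recover depth. All three constants are assumed.
float baseline   = 100;          // camera-to-laser offset, in mm (assumed)
float focal      = 600;          // camera focal length, in pixels (assumed)
float laserAngle = radians(80);  // angle between laser plane and baseline (assumed)

float depthAtPixel(float xPixel) {
  // xPixel is measured from the image center. The camera's ray through that
  // pixel runs along (xPixel / focal, 1) in the x-z plane; the laser's plane
  // passes through (baseline, 0) at laserAngle. Where the two meet:
  return baseline / (xPixel / focal + 1.0 / tan(laserAngle));
}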

The scanner’s hardware involves a webcam placed at the center of a motorized turntable and an attached laser level positioned so that its line, pointed at a scannable space, falls within the center of the webcam’s viewing frame. Its software pulls the webcam’s video footage into Processing and uses OpenCV blob detection on the footage as it is generated to "see" the space. The threshold for this detection is low enough to pick up only the laser’s red line, so within each frame of the video, the x-, y-, and z-positions of each point in space hit by the laser level are calculable.
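As a rough illustration of that detection step, here is a minimal Processing sketch that substitutes a simple red-channel threshold for the OpenCV blob detection; the threshold value is an assumption, tuned by eye.

import processing.video.*;

Capture cam;
float threshold = 200;  // red-channel cutoff; an assumed value

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    cam.loadPixels();
    // For each row of the frame, find the reddest pixel above the cutoff:
    // that column is where the laser line falls in this row.
    for (int y = 0; y < cam.height; y++) {
      int laserX = -1;
      float bestRed = threshold;
      for (int x = 0; x < cam.width; x++) {
        color c = cam.pixels[y * cam.width + x];
        if (red(c) > bestRed && red(c) > green(c) + blue(c)) {
          bestRed = red(c);
          laserX = x;
        }
      }
      // laserX now holds the laser line's x-position for this row (-1 if unseen).
    }
  }
  image(cam, 0, 0);
}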


I calculated y in relation to a point’s y-position in the frame. I calculated z by looking at the point’s x-position in the frame and subtracting that from an imagined x = 0 position where the laser would have appeared had it been shining on a flat wall. Since the laser produces a single vertical line, the x-position of its points in each frame would be the same, and skew in this line is produced only by changes in z; so I set x to an initial value of zero and incremented it with each increment of theta in the turntable’s rotation.
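In Processing terms, that mapping might look something like the sketch below; the reference x and depth scale are stand-ins for whatever calibration a scan actually uses.

// Hedged sketch of the point mapping described above. xRef and zScale are
// assumed calibration values; theta advances once per turntable step.
float theta  = 0;     // turntable angle, incremented with each step
float xRef   = 320;   // imagined x where the laser would fall on a flat wall
float zScale = 0.5;   // assumed depth gained per pixel of skew

PVector framePointToXYZ(int px, int py) {
  float x = theta;                  // x tracks the rotation, so the room "unwraps"
  float y = py;                     // y comes straight from the point's row in the frame
  float z = (px - xRef) * zScale;   // z from how far the laser line skews
  return new PVector(x, y, z);
}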


The table’s rotation is controlled by a motor and an Arduino, which interfaces with Processing. Point data collected from scans are stored in an XYZ text file, which can be imported into 3D modeling software (I used Rhino) and viewed as a point cloud. From there, the file can be saved as an .obj file, an .stl file, or another file type.
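The Processing side of that loop could be sketched roughly as follows; the serial port index and the one-byte "step" protocol are assumptions, not the project's actual interface.

import processing.serial.*;

Serial arduino;
PrintWriter out;

void setup() {
  arduino = new Serial(this, Serial.list()[0], 9600);  // port index is an assumption
  out = createWriter("scan.xyz");                      // one "x y z" triple per line
}

void advanceTable() {
  arduino.write('s');  // a single byte tells the Arduino to step the motor once
}

void recordPoint(PVector p) {
  out.println(p.x + " " + p.y + " " + p.z);
}

void finishScan() {
  out.flush();
  out.close();  // the finished .xyz file imports into Rhino as a point cloud
}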

A screenshot of an inaccurate scan made with an earlier prototype of the scanner. I found that my initial "mistake" scans were more visually interesting than some of my later, more accurate scans. This file in particular looks more spatial than the "truer" readings of spaces below. 

Something I consider when looking at the scans produced by this system is how much "thingness" each has. My interest in scanning comes in part from a curiosity about ideas and assumptions of "thingness", and about the points at which material things become digital things, and vice versa.

The first functionally accurate scan I took with a version of the scanner close to its current state: this scan, viewed from the top (like an aerial view), is of an empty room with a door open. The system plots points where it "sees" surface, or, more accurately, where it sees the laser level, which sweeps across all surfaces in the room. Because the x-values of the points increment as the theta of the turntable's rotation increments, the resulting scan appears as an unwrapped version of the room.

A top view of a scan of an empty room again, this time with the door closed. This scan demonstrates current problems with my software. To start, I'd like to eventually maintain the shape of the room in scans, without having to view a space as "unwrapped" by constantly incrementing values of x; one possible direction is sketched below. Another issue: for some reason, the scanner's z-calculation fails around the corners of rooms. This may be related to how the scanner reads the "jump" that the laser level makes when sweeping from one wall to an adjacent wall, or it could be a result of deeper problems in my software related to x- and z-calculations. I'd like to balance (1) improving the accuracy of my software with (2) an engagement with the scanner as a tool for the production of images, video, and sculpture that interest me visually as well as technologically.
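That first fix might look something like this: treat the measured depth as a radius around the scanner and convert each point from (theta, y, z) to Cartesian coordinates, so the scan keeps the room's shape rather than unwrapping. Whether this matches the scanner's actual calibration is an open question.

// Hedged sketch of one way to keep the room's shape instead of unwrapping it.
PVector rewrapPoint(float theta, float y, float z) {
  float r = z;  // depth from the scanner, treated as a radius (an assumption)
  return new PVector(r * cos(theta), y, r * sin(theta));
}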

An image made with the scanner. Words pulled from a text file are plotted in a 3D space according to the spatial data acquired by the scanner. I’m curious about the relationship of language (1) to the acquisition of spatial data, (2) to the creation of spaces and objects, and (3) to math and code. 
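A rough sketch of that word-plotting idea, with placeholder file names; it assumes the scan's points have already been saved as an .xyz file and that the sketch runs with a P3D renderer.

// Hedged sketch: place successive words from a text file at scanned positions.
String[] words = loadStrings("words.txt");  // placeholder file name
String[] rows  = loadStrings("scan.xyz");   // the scanner's point data

void drawWords() {
  for (int i = 0; i < rows.length; i++) {
    float[] p = float(split(rows[i], ' '));            // parse one "x y z" line
    text(words[i % words.length], p[0], p[1], p[2]);   // requires P3D
  }
}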
