<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="https://shkspr.mobi/blog/wp-content/themes/edent-wordpress-theme/rss-style.xsl" type="text/xsl"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	    xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	     xmlns:dc="http://purl.org/dc/elements/1.1/"
	   xmlns:atom="http://www.w3.org/2005/Atom"
	     xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	  xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>
<channel>
	<title>pandas &#8211; Terence Eden’s Blog</title>
	<atom:link href="https://shkspr.mobi/blog/tag/pandas/feed/" rel="self" type="application/rss+xml" />
	<link>https://shkspr.mobi/blog</link>
	<description>Regular nonsense about tech and its effects 🙃</description>
	<lastBuildDate>Fri, 10 Apr 2026 08:32:14 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://shkspr.mobi/blog/wp-content/uploads/2023/07/cropped-avatar-32x32.jpeg</url>
	<title>pandas &#8211; Terence Eden’s Blog</title>
	<link>https://shkspr.mobi/blog</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title><![CDATA[Reconstructing 3D Models from The Last Jedi]]></title>
		<link>https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/</link>
					<comments>https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#comments</comments>
				<dc:creator><![CDATA[@edent]]></dc:creator>
		<pubDate>Tue, 10 Apr 2018 11:15:44 +0000</pubDate>
				<category><![CDATA[/etc/]]></category>
		<category><![CDATA[3d]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[pandas]]></category>
		<category><![CDATA[python]]></category>
		<category><![CDATA[Star Wars]]></category>
		<guid isPermaLink="false">https://shkspr.mobi/blog/?p=29240</guid>

					<description><![CDATA[A quick tutorial in how to recover 3D information from your favourite 3D movies.  In this example, we&#039;ll be using Star Wars - The Last Jedi.  tl;dr? Here&#039;s the end result (this video is silent):  https://shkspr.mobi/blog/wp-content/uploads/2018/04/walker-text.mp4  Grab the code on GitHub.  Let&#039;s go!  Take a screenshot of your favourite scene.  Something with a clearly defined foreground and…]]></description>
										<content:encoded><![CDATA[<p>A quick tutorial in how to recover 3D information from your favourite 3D movies.</p>

<p>In this example, we'll be using <a href="https://amzn.to/2pZu15Z">Star Wars - The Last Jedi</a>.</p>

<p>tl;dr? Here's the end result (this video is silent):</p>

<p></p><div style="width: 620px;" class="wp-video"><video class="wp-video-shortcode" id="video-29240-10" width="620" height="318" preload="metadata" controls="controls"><source type="video/mp4" src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/walker-text.mp4?_=10"><a href="https://shkspr.mobi/blog/wp-content/uploads/2018/04/walker-text.mp4">https://shkspr.mobi/blog/wp-content/uploads/2018/04/walker-text.mp4</a></video></div><p></p>

<p><a href="https://github.com/edent/3D-Screenshot-to-3D-Model/">Grab the code on GitHub</a>.</p>

<h2 id="lets-go"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#lets-go">Let's go!</a></h2>

<p>Take a screenshot of your favourite scene.  Something with a clearly defined foreground and background.  The brighter the image, the better the results.<br>
<img src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/rey.jpg" alt="A stereo image pair. A young woman stands in front of a stony building." class="aligncenter size-full wp-image-29265" width="960" height="400"><br>
Split the image in two:</p>

<pre><code class="language-_">mogrify -crop 50%x100% +repage screenshot.png
</code></pre>

<p>As you can see, 3D movies compress the image horizontally. The separated screenshots will need to be restored to their full width.</p>

<pre><code class="language-_">mogrify -resize 200%x100% screenshot-*.*
</code></pre>

<p>The next step involves a little trial-and-error. Generating a depth map can be done in several ways and it takes time to find the right settings for a scene.</p>

<p>Here's <a href="https://docs.opencv.org/3.4.1/dd/d53/tutorial_py_depthmap.html">a basic Python script which will quickly generate a depth map</a>.</p>

<pre><code class="language-_">import numpy as np
import cv2
from matplotlib import pyplot as plt

leftImage  = cv2.imread('screenshot-0.png',0)
rightImage = cv2.imread('screenshot-1.png',0)

stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(leftImage,rightImage)
plt.imshow(disparity,'gray')
plt.show()
</code></pre>

<p>Depending on the settings used for <code>numDisparities</code> and <code>blockSize</code>, the depth map will look something like one of these images.</p>

<p><img src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/different-depth-maps.png" alt="Different depth maps of various accuracy" class="aligncenter size-full wp-image-29243" width="533" height="715"></p>

<p>In these examples, lighter pixels represent points closer to the camera, and darker pixels represent points further away.  Optionally, you can clean up the image in your favourite photo editor.</p>
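<p>If you'd rather clean up programmatically than in a photo editor, a simple contrast stretch often helps. Here's a minimal NumPy sketch (the function name is my own, not from the original code):</p>

```python
import numpy as np

def stretch_depth(depth):
    """Contrast-stretch a greyscale depth map to the full 0-255 range."""
    depth = depth.astype(np.float64)
    lo, hi = depth.min(), depth.max()
    if hi == lo:
        # Flat image: nothing to stretch
        return np.zeros_like(depth, dtype=np.uint8)
    return ((depth - lo) / (hi - lo) * 255).astype(np.uint8)
```

Run it on the depth map before building the point cloud so the full dynamic range is available for the <code>z</code> values.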

<p>The next step is to create a three-dimensional mesh based on that depth map, and then paint one of the original colour images onto it.</p>

<p>I recommend the excellent <a href="https://github.com/daavoo/pyntcloud">pyntcloud</a> library.</p>

<p>In order to speed things up, I resampled the images to be 192*80 - the larger the image, the slower this process will be.</p>
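<p>For the resampling itself, any image editor will do; a throwaway Pillow snippet works too (the filenames and the exact size are my assumptions):</p>

```python
from PIL import Image

def downsample(path_in, path_out, size=(192, 80)):
    """Resize an image down to the small working resolution.

    Deliberately ignores aspect ratio, so the colour image and the
    depth map end up at identical dimensions, pixel-for-pixel.
    """
    Image.open(path_in).resize(size).save(path_out)
```

Run it on both the colour frame and the depth map so their pixel grids line up.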

<pre><code class="language-_">import pandas as pd
import numpy as np
from pyntcloud import PyntCloud
from PIL import Image
</code></pre>

<p>Open the colour image and convert it to RGB:</p>

<pre><code class="language-_">colourImg    = Image.open("colour-small.png")
colourPixels = colourImg.convert("RGB")
</code></pre>

<p>Build a DataFrame of pixel coordinates and RGB values, <a href="https://stackoverflow.com/questions/49649215/pandas-image-to-dataframe">with a little help from StackOverflow</a>:</p>

<pre><code class="language-_">colourArray  = np.array(colourPixels.getdata()).reshape((colourImg.height, colourImg.width) + (3,))
indicesArray = np.moveaxis(np.indices((colourImg.height, colourImg.width)), 0, 2)
imageArray   = np.dstack((indicesArray, colourArray)).reshape((-1,5))
df = pd.DataFrame(imageArray, columns=["x", "y", "red","green","blue"])
</code></pre>
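<p>To see what that reshape is doing, here's the same recipe on a toy 2x2 image. Each row of the resulting DataFrame is one pixel's coordinates plus its colour (the toy pixel values are mine, purely for illustration):</p>

```python
import numpy as np
import pandas as pd

# A toy 2x2 "image": pixel (0,0) is [0,1,2], pixel (0,1) is [3,4,5], etc.
colourArray  = np.arange(12).reshape((2, 2, 3))
# Stack the (row, column) index of each pixel alongside its colour
indicesArray = np.moveaxis(np.indices((2, 2)), 0, 2)
imageArray   = np.dstack((indicesArray, colourArray)).reshape((-1, 5))
df = pd.DataFrame(imageArray, columns=["x", "y", "red", "green", "blue"])
# df now has 4 rows: one (x, y, red, green, blue) record per pixel
```
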

<p>Open the depth map as a greyscale image, convert it into an array of depths, and add it to the DataFrame:</p>

<pre><code class="language-_">&gt;depthImg = Image.open('depth-small.png').convert('L')
depthArray = np.array(depthImg.getdata())
df.insert(loc=2, column='z', value=depthArray)
</code></pre>

<p>Convert it to a Point Cloud and display it:</p>

<pre><code class="language-_">df[['x','y','z']] = df[['x','y','z']].astype(float)
df[['red','green','blue']] = df[['red','green','blue']].astype(np.uint)
cloud = PyntCloud(df)
cloud.plot()
</code></pre>

<h2 id="result"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#result">Result</a></h2>

<p>Here's the 192*80 image converted to 3D, and displayed in the browser:<br>
</p><div style="width: 620px;" class="wp-video"><video class="wp-video-shortcode" id="video-29240-11" width="620" height="318" preload="metadata" controls="controls"><source type="video/mp4" src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/rey-small-pointcloud-text.mp4?_=11"><a href="https://shkspr.mobi/blog/wp-content/uploads/2018/04/rey-small-pointcloud-text.mp4">https://shkspr.mobi/blog/wp-content/uploads/2018/04/rey-small-pointcloud-text.mp4</a></video></div><p></p>

<p>Wow! Even on a scaled-down image, it's quite impressive. The 3D-ness is highly exaggerated because the depth values span the full range from 0 to 255.  You can either play around with image normalisation, or adjust the values of <code>z</code> by using:</p>

<pre><code class="language-_">df['z'] = df['z']*0.5
</code></pre>

<p>The code is relatively quick to run; this is the result on the full-resolution image. The depth is less exaggerated here, although I've multiplied it by 5.</p>

<p></p><div style="width: 620px;" class="wp-video"><video class="wp-video-shortcode" id="video-29240-12" width="620" height="318" preload="metadata" controls="controls"><source type="video/mp4" src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/rey-hq-text.mp4?_=12"><a href="https://shkspr.mobi/blog/wp-content/uploads/2018/04/rey-hq-text.mp4">https://shkspr.mobi/blog/wp-content/uploads/2018/04/rey-hq-text.mp4</a></video></div><p></p>

<h2 id="creating-meshes"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#creating-meshes">Creating Meshes</a></h2>

<p>For quick viewing, PyntCloud has a built-in plotter suitable for running in Jupyter. If you don't have that, or want something higher quality, viewing 3D meshes is best done in <a href="http://www.meshlab.net/">MeshLab</a>.</p>

<p>PyntCloud can create meshes in the <a href="https://en.wikipedia.org/wiki/PLY_(file_format)">.ply format</a>:</p>

<pre><code class="language-_">cloud.to_file("hand.ply", also_save=["mesh", "points"], as_text=True)
</code></pre>

<h2 id="better-depth-maps"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#better-depth-maps">Better Depth Maps</a></h2>

<p>The key to getting this right is an accurate depth mapping. That's hard without knowing the separation of the cameras, or being able to meaningfully calibrate them.</p>

<p>For example, from this image:<br>
<img src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/hand.jpg" alt="A stereo image pair. A hand reaches for some books." class="aligncenter size-full wp-image-29261" width="960" height="400"></p>

<p>We can calculate a basic depthmap:<br>
<img src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/hand-depth-small.png" alt="A black and white image. The outline of a hand is bright white, the background fades to grey." class="aligncenter size-full wp-image-29255" width="800" height="333"></p>

<p>Which gives this 3D mesh:<br>
</p><div style="width: 620px;" class="wp-video"><video class="wp-video-shortcode" id="video-29240-13" width="620" height="318" preload="metadata" controls="controls"><source type="video/mp4" src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/hand-simple-text.mp4?_=13"><a href="https://shkspr.mobi/blog/wp-content/uploads/2018/04/hand-simple-text.mp4">https://shkspr.mobi/blog/wp-content/uploads/2018/04/hand-simple-text.mp4</a></video></div><p></p>

<p>If you use a more complex algorithm to generate a more detailed map, you can get some quite extreme results.</p>

<p></p><div style="width: 620px;" class="wp-video"><video class="wp-video-shortcode" id="video-29240-14" width="620" height="318" preload="metadata" controls="controls"><source type="video/mp4" src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/hand-complex.mp4?_=14"><a href="https://shkspr.mobi/blog/wp-content/uploads/2018/04/hand-complex.mp4">https://shkspr.mobi/blog/wp-content/uploads/2018/04/hand-complex.mp4</a></video></div><p></p>

<h3 id="depthmap-code"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#depthmap-code">Depthmap Code</a></h3>

<p>I'm grateful to <a href="https://web.archive.org/web/20180410075642/https://timosam.com/python_opencv_depthimage">Timotheos Samartzidis for sharing his work</a>.</p>

<pre><code class="language-_">import cv2
import numpy as np
from sklearn.preprocessing import normalize

filename = "screenshot"

img_left  = cv2.imread(filename+'-1.png')
img_right = cv2.imread(filename+'-0.png')

window_size = 15

left_matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=16,
    blockSize=5,
    P1=8 * 3 * window_size ** 2,
    P2=32 * 3 * window_size ** 2,
    # disp12MaxDiff=1,
    # uniquenessRatio=15,
    # speckleWindowSize=0,
    # speckleRange=2,
    # preFilterCap=63,
    # mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY
)

right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)

wls_filter = cv2.ximgproc.createDisparityWLSFilter(matcher_left=left_matcher)
wls_filter.setLambda(80000)
wls_filter.setSigmaColor(1.2)

disparity_left  = left_matcher.compute(img_left, img_right)
disparity_right = right_matcher.compute(img_right, img_left)
disparity_left  = np.int16(disparity_left)
disparity_right = np.int16(disparity_right)
filteredImg     = wls_filter.filter(disparity_left, img_left, None, disparity_right)

depth_map = cv2.normalize(src=filteredImg, dst=filteredImg, beta=0, alpha=255, norm_type=cv2.NORM_MINMAX)
depth_map = np.uint8(depth_map)
depth_map = cv2.bitwise_not(depth_map) # Invert image. Optional depending on stereo pair
cv2.imwrite(filename+"-depth.png",depth_map)
</code></pre>

<p>As good as this code is, you may still need to tune the parameters for your images to get an acceptable result.</p>
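<p>One way to take the guesswork out of tuning is a small grid search: render a depth map for every parameter combination and pick the cleanest by eye. A minimal sketch (the <code>compute</code> callback and the candidate values are my assumptions, not part of the original code):</p>

```python
from itertools import product

def grid_search(compute, num_disparities=(16, 32, 64), block_sizes=(5, 15, 25)):
    """Call compute(numDisparities, blockSize) for every parameter pair.

    Returns a dict keyed by (numDisparities, blockSize) so each candidate
    depth map can be saved to disk and compared visually.
    """
    return {(nd, bs): compute(nd, bs)
            for nd, bs in product(num_disparities, block_sizes)}
```

Wrap the stereo-matching pipeline above in a <code>compute(nd, bs)</code> function that writes each result to a separate file, then flick through the outputs.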

<h2 id="more-models"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#more-models">More models!</a></h2>

<p>Here are a few of the interesting meshes I made from the movie.  Some are more accurate than others.</p>

<h3 id="snokes-head"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#snokes-head">Snoke's Head</a></h3>

<p></p><div style="width: 620px;" class="wp-video"><video class="wp-video-shortcode" id="video-29240-15" width="620" height="318" preload="metadata" controls="controls"><source type="video/mp4" src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/snoke-text.mp4?_=15"><a href="https://shkspr.mobi/blog/wp-content/uploads/2018/04/snoke-text.mp4">https://shkspr.mobi/blog/wp-content/uploads/2018/04/snoke-text.mp4</a></video></div><p></p>

<h3 id="throne-room-battle"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#throne-room-battle">Throne Room Battle</a></h3>

<p></p><div style="width: 620px;" class="wp-video"><video class="wp-video-shortcode" id="video-29240-16" width="620" height="318" preload="metadata" controls="controls"><source type="video/mp4" src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/throne-text.mp4?_=16"><a href="https://shkspr.mobi/blog/wp-content/uploads/2018/04/throne-text.mp4">https://shkspr.mobi/blog/wp-content/uploads/2018/04/throne-text.mp4</a></video></div><p></p>

<h3 id="v-4x-d-ski-speeders"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#v-4x-d-ski-speeders">V-4X-D Ski Speeders</a></h3>

<p></p><div style="width: 620px;" class="wp-video"><video class="wp-video-shortcode" id="video-29240-17" width="620" height="318" preload="metadata" controls="controls"><source type="video/mp4" src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/snow-text.mp4?_=17"><a href="https://shkspr.mobi/blog/wp-content/uploads/2018/04/snow-text.mp4">https://shkspr.mobi/blog/wp-content/uploads/2018/04/snow-text.mp4</a></video></div><p></p>

<h3 id="kylo-rens-tie-fighter-guns"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#kylo-rens-tie-fighter-guns">Kylo Ren's TIE Fighter Guns</a></h3>

<p></p><div style="width: 620px;" class="wp-video"><video class="wp-video-shortcode" id="video-29240-18" width="620" height="318" preload="metadata" controls="controls"><source type="video/mp4" src="https://shkspr.mobi/blog/wp-content/uploads/2018/04/tie-text.mp4?_=18"><a href="https://shkspr.mobi/blog/wp-content/uploads/2018/04/tie-text.mp4">https://shkspr.mobi/blog/wp-content/uploads/2018/04/tie-text.mp4</a></video></div><p></p>

<h2 id="is-tlj-really-3d"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#is-tlj-really-3d">Is TLJ <em>really</em> 3D?</a></h2>

<p><a href="https://www.bustle.com/p/is-the-last-jedi-in-3d-worth-it-theres-a-better-way-to-upgrade-your-experience-7549762">Nope</a>! The movie was filmed with regular cameras and converted in post-production.</p>

<p>As <a href="https://shkspr.mobi/blog/2016/11/how-3d-is-star-wars-the-force-awakens/">I've discussed before</a>, The Force Awakens has some scenes which have some reasonable 3D, but it wasn't a great conversion. I think TLJ is done much better - but I wish the CGI was properly rendered in 3D.</p>

<p>Much of the 3D-ness is one or two foreground elements floating against a background. If you want <em>real</em> 3D models, you need something shot and edited for 3D - for example this <a href="https://shkspr.mobi/blog/2013/11/creating-animated-gifs-from-3d-movies-hsbs-to-gif/">Doctor Who 3D special</a>:</p>

<p><iframe title="Converting Stereoscopic Images from HSBS Movies into 3D Models" width="620" height="465" src="https://www.youtube.com/embed/YGJ4qdoAfAw?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe></p>

<h2 id="further-reading"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#further-reading">Further Reading</a></h2>

<p>Getting this working took me all around the interwibbles - here are a few resources that I used.</p>

<ul>
<li><a href="https://web.archive.org/web/20180405183853/http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html">Calibration of stereo cameras</a></li>
<li><a href="https://web.archive.org/web/20200925010505/https://rdmilligan.wordpress.com/2016/05/15/epipolar-geometry-and-depth-map-from-stereo-images/">Epipolar Geometry and Depth Map from stereo images</a></li>
<li><a href="https://rdmilligan.wordpress.com/2016/05/23/disparity-of-stereo-images-with-python-and-opencv/">Disparity of stereo images with Python and OpenCV</a></li>
<li><a href="https://erget.wordpress.com/2014/05/02/producing-3d-point-clouds-from-stereo-photos-tuning-the-block-matcher-for-best-results/">Optimizing point cloud production from stereo photos by tuning the block matcher</a></li>
<li><a href="https://albertarmea.com/post/opencv-stereo-camera/">Calculating a depth map from a stereo camera with OpenCV</a></li>
<li><a href="https://github.com/adrelino/ppf-reconstruction">3D Object Reconstruction using Point Pair Features</a></li>
<li><a href="https://web.archive.org/web/20200925010509/https://svncvpr.in.tum.de/redmine/projects/cvpr-ros-pkg/repository/entry/trunk/rgbd_benchmark/rgbd_benchmark_tools/scripts/generate_pointcloud.py">Dr. Jürgen Sturm's "Generate Pointcloud"</a></li>
</ul>

<h2 id="copyright"><a href="https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/#copyright">Copyright</a></h2>

<p>Star Wars: The Last Jedi is copyright Lucasfilm Ltd.<br>
These 6 screenshots fall under the <a href="https://www.gov.uk/guidance/exceptions-to-copyright">UK's limited exceptions to copyright</a>.<br>
Any code I have written is available under the BSD License and is <a href="https://github.com/edent/3D-Screenshot-to-3D-Model/">available on GitHub</a>.</p>
<img src="https://shkspr.mobi/blog/wp-content/themes/edent-wordpress-theme/info/okgo.php?ID=29240&HTTP_REFERER=RSS" alt="" width="1" height="1" loading="eager">]]></content:encoded>
					
					<wfw:commentRss>https://shkspr.mobi/blog/2018/04/reconstructing-3d-models-from-the-last-jedi/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		<enclosure url="https://shkspr.mobi/blog/wp-content/uploads/2018/04/walker-text.mp4" length="2283290" type="video/mp4" />
<enclosure url="https://shkspr.mobi/blog/wp-content/uploads/2018/04/rey-small-pointcloud-text.mp4" length="2534113" type="video/mp4" />
<enclosure url="https://shkspr.mobi/blog/wp-content/uploads/2018/04/rey-hq-text.mp4" length="3773259" type="video/mp4" />
<enclosure url="https://shkspr.mobi/blog/wp-content/uploads/2018/04/hand-simple-text.mp4" length="1318009" type="video/mp4" />
<enclosure url="https://shkspr.mobi/blog/wp-content/uploads/2018/04/hand-complex.mp4" length="4720974" type="video/mp4" />
<enclosure url="https://shkspr.mobi/blog/wp-content/uploads/2018/04/snoke-text.mp4" length="1552649" type="video/mp4" />
<enclosure url="https://shkspr.mobi/blog/wp-content/uploads/2018/04/throne-text.mp4" length="4189882" type="video/mp4" />
<enclosure url="https://shkspr.mobi/blog/wp-content/uploads/2018/04/snow-text.mp4" length="3335439" type="video/mp4" />
<enclosure url="https://shkspr.mobi/blog/wp-content/uploads/2018/04/tie-text.mp4" length="2889951" type="video/mp4" />

			</item>
	</channel>
</rss>
