Originally published in The Warwick Engineer Fresher’s Edition 2015.
You can view the poster of the research here.
Originally from Rome, Italy, I am a first-year MORSE student currently interning in WMG’s Product Evaluation Technology Group, a multidisciplinary research team whose goal is to provide real-world solutions for various business sectors. The internship will last 8 weeks, during which I will be researching workflows aimed at the creation of immersive virtual environments from 3D point cloud scan data.
When discussing various approaches for turning point clouds into virtual 3D environments, one ought to remember the factors believed to play the most important roles in determining which workflow to adopt: computational power, the desired level of realism, and the level of interactivity. On the one hand, one might choose a very high level of realism, which requires substantial computational power but offers a near-to-zero level of interactivity, by rendering a single viewpoint with a physically based rendering engine. On the other hand, at the opposite extreme, we find those environments that are fully implemented in a game engine such as Unity3D©, which, although they provide freedom of movement and require only an average gaming PC to run, do not offer the same degree of realism. However, our choices should not be, and are not, limited to these two approaches.

The purpose of this article is to introduce a workflow for the creation of immersive VR environments from 3D point cloud scan data with the following characteristics: a high level of realism, very low hardware requirements, and a constrained level of movement within the model. More concretely, after creating the geometry described by the point cloud, we render it from different viewpoints, as if we were shooting photographs for 360° panoramas, and then assemble the renders into a virtual walkthrough.

In order to explain this approach, the article is divided into three sections. Section I outlines the process of mesh reconstruction, focusing in particular on the principles behind a custom-made Houdini© asset that assists the modeller by performing contour reconstruction through the use of NURBS. Section II explains how to import the model created so far into a rendering environment. Section III expands on the characteristics of DWalk, an application that I have developed to visualise renders in a Google Street View© fashion.
I Mesh Reconstruction
As far as this workflow is concerned, the point cloud shall not be fed into an automatic mesh reconstructor such as MeshLab© but shall instead be used as a reference by the modeller. We can divide this process into two parts. Firstly, a simple geometry including the main features of the model is created. Secondly, the modeller has to decide the level of detail that the final mesh will contain and proceed to create those details.

The first step is not bound by the choice of software, as any professional 3D modelling package can be used. Regarding the second step, I have built a custom-made asset in Houdini© that helps the modeller by converting the points defining the contour of an object into a NURBS curve ready to be extruded – see Figure 1. In so doing, it is possible to create complex patterns that are close to the original geometry. Now that the geometry has been created, we can move on to the next phase.
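To give a flavour of how such an asset can work, the snippet below is a minimal sketch of the underlying idea, written as a Houdini Python SOP: it fits a NURBS curve through the incoming contour points. This is an illustration of the principle only, not the asset itself, and the curve order is an assumption.

```python
# Minimal Houdini Python SOP sketch (illustrative, not the actual asset):
# fit a NURBS curve through contour points picked from the point cloud.
node = hou.pwd()
geo = node.geometry()

# Collect the positions of the incoming contour points.
positions = [point.position() for point in geo.points()]

# Replace the input points with a single NURBS curve passing through
# them, ready to be extruded into a surface. Assumes at least `order`
# points are present (order 4 is a common default).
geo.clear()
curve = geo.createNURBSCurve(len(positions), is_closed=False, order=4)
for vertex, pos in zip(curve.vertices(), positions):
    vertex.point().setPosition(pos)
```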
II Rendering and Materials
At this stage, rendering a single viewpoint of our model would not meet the level of interaction described in the introduction. In fact, the key behind this workflow is to render a pre-defined set of views, which will later be imported into DWalk. The pre-defined set of views consists of “nodes”, each made of a number of cameras pointing in different directions – see Figure 2.

To create and render the nodes, I have developed a Blender© add-on. It enables the user to specify each node’s resolution, i.e. the number of cameras, to set the rendering parameters, and to render the entire network.
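As a rough illustration of what the add-on does under the hood, here is a minimal Blender Python sketch that places the cameras of a single node, evenly spaced in yaw, and renders a still from each. The function name, output path and default resolution are illustrative assumptions, not the add-on’s actual interface.

```python
import math
import bpy

# Hedged sketch of the add-on's core idea: place `resolution` cameras
# at one node, evenly spaced in yaw, and render a still from each.
def render_node(name, location, resolution=8, out_dir="//renders/"):
    scene = bpy.context.scene
    for i in range(resolution):
        yaw = 2.0 * math.pi * i / resolution
        # Blender cameras look along -Z by default; pitching 90 degrees
        # about X levels them with the horizon, then yaw spins them
        # around the vertical axis.
        bpy.ops.object.camera_add(location=location,
                                  rotation=(math.pi / 2.0, 0.0, yaw))
        cam = bpy.context.object
        cam.name = "{}_cam{:02d}".format(name, i)
        scene.camera = cam
        scene.render.filepath = "{}{}_{:02d}.png".format(out_dir, name, i)
        bpy.ops.render.render(write_still=True)

# Example: one node at roughly eye height.
render_node("node01", location=(0.0, 0.0, 1.7))
```

Now, we are ready to feed the rendered images into DWalk.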
III DWalk
DWalk is an interactive image visualiser that enables the user to navigate through the images composing the above-defined nodes as if they were moving a virtual camera in 3D space. DWalk has been created in Unity3D© so that it can be ported to different platforms, i.e. Mac, Windows and Linux. In addition to importing nodes rendered in Blender, the application offers an overhead view of the entire network and the possibility to connect nodes with each other so that it is possible to walk between them. The key feature of DWalk is that the virtual environment displayed is not rendered live but is made of pre-rendered still images that require negligible computational power to view.
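Although DWalk itself is written in Unity3D©, the idea behind its node network can be sketched in a few lines of Python. The class below is a hypothetical illustration, not DWalk’s actual code: each node holds one pre-rendered still per camera direction, and links between nodes are what make the walkthrough possible.

```python
# Illustrative sketch of the node-network idea (not DWalk's code):
# a graph whose nodes hold one pre-rendered still per camera direction.
from dataclasses import dataclass, field

@dataclass
class Node:
    images: list                                     # stills, one per yaw step
    neighbours: dict = field(default_factory=dict)   # heading (deg) -> Node

    def image_for(self, heading_deg):
        """Pick the pre-rendered still closest to the view heading."""
        step = 360.0 / len(self.images)
        return self.images[int(round(heading_deg / step)) % len(self.images)]

# Linking two nodes lets the user "walk": facing roughly 0 degrees at
# node a and stepping forward moves the viewpoint to node b.
a = Node(images=["node01_%02d.png" % i for i in range(8)])
b = Node(images=["node02_%02d.png" % i for i in range(8)])
a.neighbours[0] = b
b.neighbours[180] = a

print(a.image_for(47.0))   # -> node01_01.png
```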
An overview of the main phases that characterise the proposed workflow has been given. We have seen how it is possible to find a compromise between realism, computational power and interactivity. The end product of this workflow is a semi-interactive environment that could be ideal for customers who do not need complete freedom of movement and who want to be able to display their models without needing a powerful computer at their disposal. There is still work to be done in order to automate this process as much as possible. For the moment, a network of nodes can only be created in Blender but, in the near future, I will develop a plugin for Autodesk 3ds Max© that will offer the same tools as the Blender one.