Transforming Photogrammetry into a Post-Human Virtual Reality Experience

Pierpaolo Grandinetti, Francesca Sonda, and Gianluca Rubino are NABA students specialising in MA Creative Media Production. Together, they discuss their VR project, 'Visioni Realizzabili,' exploring post-human vision through a luminous point cloud.

Pierpaolo Grandinetti, Francesca Sonda and Gianluca Rubino are students at NABA, Nuova Accademia di Belle Arti, enrolled in the MA Creative Media Production (New Media Arts) programme.

Pierpaolo is a researcher in non-linear narrative composition, creative coding, human-computer interaction and AI. Francesca is a visual artist focused on video art and experimental cinema. Gianluca is a creative coder and storyteller, skilled in programming and cross-media productions.

In this article, the team share the process of creating Visioni Realizzabili (Feasible Visions), a VR video project that explores post-human vision through a luminous point cloud. The asset that sparked the idea is a photogrammetric scan of an abandoned abbey in the south of Italy.

Project and goals

Visioni Realizzabili (Feasible Visions) immerses spectators in a utopian VR world represented by a luminous point cloud. In a peaceful atmosphere, spectators are guided by a narrating voice who introduces the post-human philosophy. They explore alternative realities where all forms of power and exploitation, characteristic of the Anthropocene, are overcome. This immersive journey prompts reflection on the self and aspirations for the future.

Our aim was to create an experience that could immerse people in a post-human logic, spark a new consciousness, and give viewers the opportunity to ask themselves new questions: questions about our relationships with other species and living beings, and about the consequences of our actions on others.

The Italian name “Visioni Realizzabili” comes from trying to keep the acronym “VR” while using two words that evoke the post-human concept of “something that could be”.

The abbey

The main asset in which the narrative is set is the abbey of Santa Maria di Corazzo in Castagna, in the province of Catanzaro, Calabria (Italy). The abbey, first built by the Benedictines and later rebuilt by the Cistercians, was abandoned in 1783 due to an earthquake. We revived it digitally through photogrammetry, scanning the environment with the Polycam app and leveraging the LiDAR sensor of an iPhone 13 Pro.

The result of the photogrammetry inside Polycam


What is post-humanism, and how did it inform our text and visuals?

Post-humanism is a set of ideas that have been emerging since around the 1990s. This philosophy decenters the placement of humans above other life forms, and at the same time rejects the view of humans as autonomous and fully defined individuals. Instead, it treats the human itself as an assemblage, co-evolving with other forms of life, enmeshed with the environment and technology. According to Donna Haraway, a leading exponent of this philosophy, our post-human future will be a time “when species meet”, when humans finally make room for non-human things within the scope of our moral concern. A post-human ethic, therefore, encourages us to think beyond the interests of our own species, to be less narcissistic in our conception of the world, and to take seriously the interests and rights of things that are different from us.

Starting from this concept, we developed our environment filled with living creatures, plants and animals, that together create a safe space where the visitor can feel welcomed and embraced.

The script that accompanies and leads the experience is, indeed, a reworking of Haraway’s thought.

Why a Virtual Reality experience?

The project started as an assignment for a university class on virtual reality experiences and then became a project of its own.

Being immersed in a 360° environment is very different from the typical “2D window” onto the world offered by cinema and traditional painting. This challenged us to create a world in which the visitor would be truly immersed.

We decided to produce a video without interactive elements, to let the visitor concentrate on the visuals and the voice. We generated two environments inside Unreal Engine 5 and captured them with a virtual camera as 360° frames. Those frames were then composited with music and voice in Adobe Premiere Pro.
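For readers unfamiliar with the format: each of those 360° frames is an equirectangular image, in which every pixel corresponds to a viewing direction. As a rough, engine-independent illustration (Unreal Engine handles this capture internally; this is not part of our actual pipeline), the mapping from a view direction to a pixel can be sketched in a few lines of Python:

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a unit view direction to pixel coordinates in an
    equirectangular (360-degree) frame of the given size."""
    lon = math.atan2(x, z)                   # yaw around the vertical axis
    lat = math.asin(max(-1.0, min(1.0, y)))  # pitch above/below the horizon
    u = (lon / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / math.pi) * (height - 1)
    return u, v

# Looking straight ahead (+Z) lands in the centre of the frame.
print(direction_to_equirect(0, 0, 1, 4096, 2048))  # (2047.5, 1023.5)
```

The full horizontal field of view spans the image width and the vertical field spans its height, which is why a 360° frame is typically twice as wide as it is tall.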

We use the word “visitor” throughout this article because we intend to present the work during art exhibitions where VR headsets are available.

AI generation of 3D assets

To create the entities within the dream world, we drew inspiration from the style of Female Pentimento and similar artists who lean heavily on “biospiritualism”, animals, and their potential mutations. We fine-tuned an AI model on a selection of such artworks to generate images and videos in a similar style. We then used Tripo to convert the images into 3D assets. See an example on Sketchfab.

Images generated with an AI model fine-tuned on Female Pentimento's art
The 3D octopus generated by Tripo from the image above

In the initial phase, we also employed the Tripo model for the creation of pulsating matter, leading to some serendipitous discoveries during the asset generation process. Particularly noteworthy is the brain entity in the first world, which contains peculiar blobs. These unexpected formations were retained and integrated into the fabric of the first world.

The first world: inside the atom with cell and brain inspired visuals

The first environment the visitor sees is what we call “the first world”. We drew inspiration from the functions of the human body and the brain. Every part is in perfect harmony, working together in a complex network of interconnections.

This concept of cellular interdependence, where synapses, neurons, cells, and organs collaborate to maintain balance and functionality, guided us in creating the very matter within which the viewer finds themselves. Just as each cell contributes to the functioning of the human body, the lights shining from outside in our project symbolise a “beyond”, a world in the making where not only internal elements integrate, but also the external world, with its places, people, and species, is part of a vast network of evolving interconnections.

The blue blobs in the equirectangular view of the 360° video

The second world: point cloud and UE5’s Niagara particles

During the experience, the visitor is then carried to the outer world, beyond the single cell, the single point, of world one, and is presented with a “regenerated” point cloud that we call world two.

This effect portrays a dreamlike world, creating fascination and a surreal situation in which one can be enveloped. In other words, we enter into symbiosis with the surrounding environment and the species that populate it. We must feel part of them, and thus part of the change.

We used Unreal Engine's VFX system, Niagara, to transform the photogrammetry of the abbey, and all the other 3D models, into a point cloud suitable for storytelling. By using a mesh-location module inside the Niagara system, we were able to tweak the density of the points sampled on the original mesh.
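Conceptually, this kind of mesh sampling scatters particles over the surface of a mesh, weighting each triangle by its area so the density stays even across faces of different sizes. As a simplified, engine-independent sketch of the idea (not Niagara's actual implementation):

```python
import numpy as np

def sample_point_cloud(vertices, faces, n_points, seed=0):
    """Scatter n_points uniformly over a triangle mesh surface,
    weighting each face by its area."""
    rng = np.random.default_rng(seed)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Face areas via the cross product of two edge vectors
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    face_idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle
    r1, r2 = rng.random(n_points), rng.random(n_points)
    s = np.sqrt(r1)
    a, b = 1 - s, s * (1 - r2)
    c = 1 - a - b
    return (a[:, None] * v0[face_idx]
            + b[:, None] * v1[face_idx]
            + c[:, None] * v2[face_idx])

# A unit square made of two triangles, sampled at two densities
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
sparse = sample_point_cloud(verts, tris, 100)
dense = sample_point_cloud(verts, tris, 10000)
print(sparse.shape, dense.shape)  # (100, 3) (10000, 3)
```

Raising `n_points` is the knob that corresponds to tweaking the point density on the original mesh: the same surface, rendered as a sparser or denser cloud.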

At the end, the points start moving in turbulence all around, until they overwhelm the visitor.
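Particle turbulence of this kind is commonly driven by a divergence-free “curl noise” velocity field, so the points swirl without bunching up or thinning out. As a toy illustration of that idea (a hypothetical sketch, not our actual Niagara setup):

```python
import numpy as np

def turbulent_velocity(p, t):
    """A divergence-free 'curl noise'-style velocity field:
    the analytic curl of the vector potential
    psi = (sin(y + t), sin(z + t), sin(x + t))."""
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return np.stack([-np.cos(z + t),   # dR/dy - dQ/dz
                     -np.cos(x + t),   # dP/dz - dR/dx
                     -np.cos(y + t)],  # dQ/dx - dP/dy
                    axis=1)

def advect(points, steps=200, dt=0.02):
    """Euler-integrate the points through the field over time."""
    p = points.copy()
    for i in range(steps):
        p = p + dt * turbulent_velocity(p, i * dt)
    return p

rng = np.random.default_rng(1)
pts = rng.random((500, 3))
swirled = advect(pts)
print(swirled.shape)  # (500, 3)
```

Because the field has zero divergence, the cloud keeps a roughly constant local density as it swirls, which is what makes the final vortex read as motion rather than dissolution.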

The final vortex in the equirectangular view of a 360° video frame

The music and the AI voice

We wanted to create an environment that the visitor would want to come back to after the VR experience. So we aimed for a meditative vibe that could relax visitors and make them feel at peace. That's the reason behind the music choice. We found a piece by Deep Dreamer, a music creator on YouTube, called “Deep Into Nature”.

For the AI voice, we used Eleven Labs, which generated a voice that read our script with the intonation we preferred. Using AI in this project comes not only from a material necessity, but also because we can't exclude machines from our holistic thought.

How to communicate a VR experience without a headset

If one does not experience the work with the headset, they will never fully understand it. Yet we must be able to talk about it without one: this very article is an example.

So, what are the limitations and how to overcome them?

First of all, viewing through a VR headset is a private experience.

Secondly, showing a VR video via YouTube on a normal screen does not give an interested viewer enough insight into the experience. We found it much more impactful to shoot a dedicated teaser with traditional “2D” shot composition and camera movements. We also avoided showing a person wearing a headset in the teaser, to avoid breaking the immersion.


This project helped us rethink the possibilities of the uses of the VR headset. It also gave us many insights into what is possible with current technologies.

We were very proud that we were able to talk about post-humanism in this very technological way. We can’t wait to explore this further.

Along the way, we picked up skills in writing, Unreal Engine, VR and AI content generation that will be useful for future projects.

All in all, it was a great production, and all three of us are very satisfied.

Contact Pierpaolo, Francesca, and Gianluca via their linked Rookies profiles here.