Commit ee05b6d9 authored by Rita-Josy Haddoub's avatar Rita-Josy Haddoub

Update README.md

This is the source code and pretrained model for Béton. It uses deep learning.
![Screen_Shot_2020-01-09_at_8.24.16_PM](/uploads/51c8187bc4ed980505d91a34a63a3705/Screen_Shot_2020-01-09_at_8.24.16_PM.png)
# Overview
A collection of photographs of found _Béton_ has been stored in a dedicated image dataset. With this dataset, I am exploring the ways in which _‘Machine Learners’_ refers to humans as much as it does to computers. As a single _‘variable’_ representing experimentation and fragmentation, _Béton_ can be seen computationally within the latent space. The latent space is the hidden layer of a machine-learning model that breaks its input apart and tries to reassemble it by learning possible compositions. To visualize the latent space, I feed my image dataset of _Béton_ to a network that decodes this inner process and generates the reconstructed _Béton_.
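The encode-to-latent-then-reconstruct idea described above can be sketched with a tiny linear autoencoder. This is an illustrative toy, not the project's actual network: the 4-dimensional "images", the latent size, and all weight names are invented for the example, standing in for the real 256x256 photographs and the pix2pix model.

```python
import random

random.seed(0)

D, K, N = 4, 2, 24          # input dim, latent ("hidden layer") dim, sample count
LR, STEPS = 0.05, 300       # learning rate and training epochs

# Toy "photographs": 4-pixel vectors with correlated values, standing in
# for the flattened 256x256 Beton images.
data = []
for _ in range(N):
    a, b = random.uniform(-1, 1), random.uniform(-1, 1)
    data.append([a, 0.9 * a, b, 0.9 * b])

# Encoder compresses into the latent space; decoder tries to reassemble.
enc = [[random.uniform(-0.5, 0.5) for _ in range(K)] for _ in range(D)]
dec = [[random.uniform(-0.5, 0.5) for _ in range(D)] for _ in range(K)]

def encode(x):
    return [sum(x[i] * enc[i][k] for i in range(D)) for k in range(K)]

def decode(z):
    return [sum(z[k] * dec[k][i] for k in range(K)) for i in range(D)]

def loss():
    total = 0.0
    for x in data:
        r = decode(encode(x))
        total += sum((r[i] - x[i]) ** 2 for i in range(D))
    return total / (N * D)

loss_before = loss()
for _ in range(STEPS):
    for x in data:
        z = encode(x)
        r = decode(z)
        g = [2.0 * (r[i] - x[i]) / D for i in range(D)]   # dLoss/dReconstruction
        # Compute both gradients before updating either weight matrix.
        grad_dec = [[g[i] * z[k] for i in range(D)] for k in range(K)]
        grad_enc = [[sum(g[j] * dec[k][j] for j in range(D)) * x[i]
                     for k in range(K)] for i in range(D)]
        for k in range(K):
            for i in range(D):
                dec[k][i] -= LR * grad_dec[k][i]
        for i in range(D):
            for k in range(K):
                enc[i][k] -= LR * grad_enc[i][k]
loss_after = loss()
print(f"reconstruction error: {loss_before:.4f} -> {loss_after:.4f}")
```

After training, the decoder's output is the "reconstructed visualization": the network's own recomposition of what it learned from the input.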
Click here to see a simulated display for an exhibition.
The photo grid is projected onto an entire wall surface, with the Béton sculptures displayed physically either on the floor or on plinths.
The photo grid shuffles through all of the JPG images fed into and produced by the Béton GAN model, so it includes my cropped 256x256 image dataset, the computed test set, and the reconstructed visualization (the latent image).
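The shuffling photo grid amounts to pooling the three image sets, shuffling them together, and laying them out in rows. A minimal sketch, assuming hypothetical folder names and counts (the real dataset paths and sizes differ):

```python
import random

# Hypothetical image pools: the cropped 256x256 dataset, the computed
# test set, and the reconstructed (latent) visualizations.
dataset = [f"dataset/beton_{i:03d}.jpg" for i in range(12)]
test_set = [f"test/beton_{i:03d}.jpg" for i in range(6)]
latent = [f"latent/beton_{i:03d}.jpg" for i in range(6)]

def photo_grid(images, columns):
    """Shuffle all images together and arrange them into grid rows."""
    pool = list(images)
    random.shuffle(pool)
    return [pool[i:i + columns] for i in range(0, len(pool), columns)]

grid = photo_grid(dataset + test_set + latent, columns=6)
for row in grid:
    print(row)
```

Re-running `photo_grid` gives a new ordering each time, which is what produces the shuffling effect on the projected wall.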
Project adjustments due to the Covid-19 lockdown and the Lebanese economic/political/social unrest (e.g. electricity cuts):
**From automation to collaboration:**
Initially, I wanted to build my own network model that would compute the whole process from data input to latent sculpture. The output of the model would have gone a step further than a reconstructed visualization as a .JPG, producing an .STL file instead. .STL is a 3D model format recognized by 3D printers. I was beginning to look into point clouds to automate this step within my model; finally, the .STL would have been rendered in concrete by a 3D printer.
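To make the intended .JPG-to-.STL step concrete, here is a minimal sketch of writing a triangle mesh in the ASCII STL format that 3D-printing toolchains read. The tetrahedron is a stand-in shape, not an actual Béton reconstruction; the function names are my own.

```python
import math

def facet_normal(v0, v1, v2):
    """Unit normal of a triangle via the cross product of its two edges."""
    ux, uy, uz = (v1[i] - v0[i] for i in range(3))
    vx, vy, vz = (v2[i] - v0[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    length = math.sqrt(sum(c * c for c in n)) or 1.0
    return tuple(c / length for c in n)

def to_ascii_stl(name, triangles):
    """Serialize (v0, v1, v2) triangles into the ASCII STL format."""
    lines = [f"solid {name}"]
    for v0, v1, v2 in triangles:
        nx, ny, nz = facet_normal(v0, v1, v2)
        lines.append(f"  facet normal {nx:.6f} {ny:.6f} {nz:.6f}")
        lines.append("    outer loop")
        for x, y, z in (v0, v1, v2):
            lines.append(f"      vertex {x:.6f} {y:.6f} {z:.6f}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# A single tetrahedron standing in for a reconstructed Beton mesh.
p = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tetra = [(p[0], p[2], p[1]), (p[0], p[1], p[3]),
         (p[0], p[3], p[2]), (p[1], p[2], p[3])]
stl = to_ascii_stl("beton", tetra)
print(stl.splitlines()[0])
```

In the full pipeline I had in mind, the triangles would come from meshing a point cloud derived from the model's output rather than being listed by hand.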
I did not go on to explore TensorFlow and configure my own GAN; instead, I trained my own model with pix2pix, which comes with preset properties, using the knowledge of Python I gained in previous machine-learning modules. With this adjustment, I kept my data-input-to-physical-output idea: instead of 3D printing a GAN output in concrete, I went to a brick factory and molded a Béton myself, following the meticulous designs generated by my model.
# Data
# Rendering a Neural Network to Concrete