Commit 3b6d84b7 authored by Rita-Josy Haddoub


# Project Adjustments to Covid-19 lockdown, and Lebanese Economic/Political/Social
Initially, I wanted to build my own network model that would compute the whole pipeline from data input to latent sculpture. The output of the model would have gone a step beyond a reconstructed visualization as a .JPG, producing an .STL file instead. .STL is a 3D model format recognized by 3D printers. I was beginning to look into point clouds to automate this step within my model, and finally the .STL would have been rendered in concrete through a 3D printer.
I did not carry on to explore TensorFlow and configure my own GAN; rather, I trained my own model through pix2pix, which has pre-set properties, using my knowledge of Python from previous Machine Learning modules. With that adjustment, I kept my data-input-to-physical-output idea. Instead of 3D printing a GAN into concrete, I went to a brick factory and molded a Beton myself, following the meticulous designs generated by my model. The concept of rendering a neural network to concrete remains as intended; however, sculpting it myself involves more collaboration with the network, rather than giving the project full automation.
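For readers unfamiliar with the tool, a minimal sketch of how training a model like this with pix2pix-tensorflow typically looks from the command line. The directory names here are placeholders, not the ones used in this project, and the flags should be checked against the pix2pix-tensorflow README:

```shell
# Pair each photo with a center-blanked copy (the repo's "blank" operation),
# then combine the pairs side by side for the inpainting-style task.
python tools/process.py --input_dir beton_256 --operation blank --output_dir beton_blank
python tools/process.py --input_dir beton_blank --b_dir beton_256 --operation combine --output_dir beton_pairs

# Train on the combined pairs, mapping blanked images (A) to originals (B).
python pix2pix.py --mode train --output_dir beton_model --max_epochs 200 \
  --input_dir beton_pairs --which_direction AtoB
```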
# Data
The Beton Dataset is a collection of self-taken photos of beton spontaneously found throughout the city of Beirut and its outskirts.
Scraping images from large online databases is the conventional way to collect images to work with GANs. Deep learning goes hand in hand with big data, as it deals with extracting meaningful information from vast amounts of it. Memo Akten's Deep Meditations illustrates GANs as a powerful medium for creative expression: his image dataset consists of 'everything', all scraped from Flickr with tags such as 'life, love, faith, everything, etc.'.
The power of GANs to learn the distribution of all their input data, and to find forms, compositions, and patterns within an endless network of possible trajectories, makes them a fascinating tool for understanding big data better, and distinguishes them from other algorithms.
My raw image dataset currently consists of only about 40 original images. The model, however, has been trained on a total of 274 images, consisting of resized, cropped, rotated, and otherwise transformed versions of the original dataset. I then ran a batch process to rescale the images to 256x256. The method used for training is 'inpainting' from pix2pix-tensorflow.
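The augmentation and batch-rescale step can be sketched as follows. This is a minimal illustration assuming Pillow, not the exact script used for the project; the directory names and the particular set of transforms (rotations and a mirror) are assumptions:

```python
# Sketch: grow a small raw photo set into a larger training set,
# then rescale everything to the 256x256 size pix2pix expects.
from pathlib import Path
from PIL import Image, ImageOps

def augment(img):
    """Yield simple variants of one photo: the original, three rotations, and a mirror."""
    yield img
    for angle in (90, 180, 270):
        yield img.rotate(angle, expand=True)
    yield ImageOps.mirror(img)

def build_dataset(src_dir, dst_dir, size=(256, 256)):
    """Write augmented, rescaled copies of every .jpg in src_dir; return the count."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in sorted(Path(src_dir).glob("*.jpg")):
        img = Image.open(path).convert("RGB")
        for variant in augment(img):
            variant.resize(size).save(dst / f"{path.stem}_{count}.png")
            count += 1
    return count
```

With five variants per photo, 40 originals would yield 200 training images; the project's 274 suggests additional crops and transforms beyond this sketch.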
Creating the Beton dataset manually was a deliberate choice, firstly because data mining is restricted to digitized and transparent information, and the way I see Beton falls outside its categorization within construction sites. The dataset grew as I came across different uses of Beton; it was never my intention to search for them. I also received images by text from others who came across one. Getting the data was a very material and lived-in process.
# Rendering a Neural Network to Concrete