Commit efc70be6 authored by Rita-Josy Haddoub's avatar Rita-Josy Haddoub

Update README.md

parent 5fcb8a28
This is the pretrained model for _Béton_, along with its raw image dataset. The model uses deep learning through pix2pix, a conditional generative adversarial network with an encoder-decoder (autoencoder-style) generator.
![Screen_Shot_2020-01-09_at_8.24.16_PM](/uploads/51c8187bc4ed980505d91a34a63a3705/Screen_Shot_2020-01-09_at_8.24.16_PM.png)
![Screen_Shot_2020-06-20_at_12.46.31_PM](/uploads/08cac686f2c7ca2c6292a3cbc846954d/Screen_Shot_2020-06-20_at_12.46.31_PM.png)![IMG_1_copy](/uploads/c939788693fa2b55fb8e2dde59b460b7/IMG_1_copy.png)
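pix2pix trains its generator with a conditional adversarial loss plus an L1 reconstruction term that keeps outputs close to their targets. A minimal pure-Python sketch of that combined objective is below; the function name and list-based inputs are illustrative, not part of this repository.

```python
import math

def pix2pix_generator_loss(disc_fake, fake_img, target_img, lam=100.0):
    """Illustrative sketch of the pix2pix generator objective:
    a non-saturating adversarial term plus a lambda-weighted L1 term.
    disc_fake: discriminator probabilities on generated patches.
    fake_img / target_img: flat lists of pixel values.
    """
    eps = 1e-12  # avoid log(0)
    # Adversarial term: push the discriminator's score on fakes toward 1.
    adv = -sum(math.log(p + eps) for p in disc_fake) / len(disc_fake)
    # L1 term: penalize pixel-wise distance from the ground-truth image.
    l1 = sum(abs(f - t) for f, t in zip(fake_img, target_img)) / len(fake_img)
    return adv + lam * l1
```

A perfect reconstruction that fully fools the discriminator drives the loss toward zero; the large `lam` weight is what makes pix2pix outputs stay faithful to the conditioning image rather than just plausible.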
# Overview
A collection of photographs of found _Béton_ is stored in a dedicated image dataset. With this dataset, I am exploring the ways in which _‘Machine Learners’_ refers to humans as much as it does to computers. As a single _‘variable’_ representing experimentation and fragmentation, _Béton_ can be seen computationally within the ‘latent space.’ The latent space is the hidden layer of machine learning that breaks its input apart and tries to reassemble it by learning possible compositions. To visualize the latent space, I feed my image dataset of _Béton_ to a network that decodes this inner process and generates the re-done _Béton_.
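The break-apart-and-reassemble cycle described above can be caricatured in a few lines: a toy "autoencoder" that compresses an image into a half-size latent code and expands it back, losing detail along the way. All names and values here are illustrative; a real network learns these mappings instead of hard-coding them.

```python
def encode(pixels):
    # Compress: average adjacent pixel pairs into a half-size latent code.
    return [(pixels[i] + pixels[i + 1]) / 2 for i in range(0, len(pixels), 2)]

def decode(latent):
    # Reconstruct: expand each latent value back into two pixels.
    out = []
    for v in latent:
        out.extend([v, v])
    return out

image = [0.0, 0.5, 1.0, 0.5]
latent = encode(image)           # hidden representation: [0.25, 0.75]
reconstruction = decode(latent)  # [0.25, 0.25, 0.75, 0.75] -- detail is lost
```

The reconstruction is recognizably the input yet not identical to it; that gap between original and re-assembled output is where the generated _Béton_ forms live.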
**Project Adjustments to Covid-19 lockdown, and Lebanese Economic/Political/Social Crisis**
**From Automation to Collaboration :**
Initially, I wanted to build my own network model that would compute the whole process, from data input to latent sculpture. The output of the model would have gone a step beyond a reconstructed visualization as a .JPG, producing an .STL file instead. An .STL file is a 3D model format recognized by 3D printers. I was beginning to look into point clouds to automate this process through my model, and finally the .STL would have been rendered in concrete by a 3D printer.
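For reference, ASCII STL is a plain-text format that lists each triangle of a surface by its normal vector and three vertices. A minimal writer might look like the sketch below; the helper name is hypothetical and not part of this project.

```python
def write_ascii_stl(name, triangles):
    """Serialize triangles to the ASCII STL format.

    triangles: list of (normal, v1, v2, v3), each a 3-tuple of floats.
    """
    lines = [f"solid {name}"]
    for normal, *verts in triangles:
        lines.append("  facet normal {:.6f} {:.6f} {:.6f}".format(*normal))
        lines.append("    outer loop")
        for v in verts:
            lines.append("      vertex {:.6f} {:.6f} {:.6f}".format(*v))
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)
```

A point cloud would first need to be meshed into triangles (e.g. by surface reconstruction) before it could be exported this way, which is the automation step the paragraph above refers to.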
I did not carry on to explore TensorFlow and configure my own GAN; rather, using my knowledge of Python from previous machine-learning modules, I trained my own model through pix2pix, which has its own pre-set properties. With this adjustment, I kept my data-input-to-physical-output idea. Instead of 3D printing a GAN into concrete, I went to a brick factory and molded a _Béton_ myself, following the meticulous designs generated by my model. The concept of rendering a neural network in concrete remains as intended; however, sculpting it myself involves more collaboration with the network, rather than giving the project full automation.
# Data
![IMG_0791__1_](/uploads/b7d5c7985ee2c6fde213e1e856093536/IMG_0791__1_.jpg)
![IMG_5812_2](/uploads/d2d0b813cdf2e8bfa34104695f047bf4/IMG_5812_2.jpg)