Commit 20e36c80 authored by Rita-Josy Haddoub

Update README.md

parent 2d3ad2ce
My raw image dataset currently consists of only about 40 original images. The model, however, has been trained on a total of 274 images, produced by resizing, cropping, rotating, and otherwise transforming the original dataset. I then ran a batch process to rescale the images to 256x256. The method used for training is ‘inpainting’ from pix2pix-tensorflow.
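For illustration, here is a minimal sketch of such an augmentation and rescaling pass using Pillow. The folder names, file pattern, and exact set of transforms are assumptions; the actual batch process is not part of this commit.

```python
# Hypothetical sketch: expand a small set of originals with simple
# flips and rotations, then rescale everything to the 256x256
# resolution that pix2pix-tensorflow expects.
from pathlib import Path
from PIL import Image

SRC = Path("beton_raw")       # assumed folder of ~40 original images
DST = Path("beton_dataset")   # assumed output folder
DST.mkdir(exist_ok=True)

def variants(img):
    """Yield the original plus mirrored and rotated versions."""
    yield img
    yield img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
    for angle in (90, 180, 270):
        yield img.rotate(angle, expand=True)

for path in sorted(SRC.glob("*.jpg")):
    with Image.open(path) as img:
        for i, v in enumerate(variants(img)):
            out = v.resize((256, 256), Image.Resampling.LANCZOS)
            out.save(DST / f"{path.stem}_{i}.png")
```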
Creating the Béton dataset manually was a deliberate choice: data mining is restricted to digitized and transparent information, and the definition of Béton I am after falls outside its categorization within construction sites. The dataset grew as I came across different uses of Béton; it was never my intention to search for them. I also received images by text from others who would come across one. Gathering the data was a very material and lived-in process.
# Latent Variables and _Béton_
> “Unsupervised algorithms are often used in an exploratory setting, when a data scientist wants to understand the data better, because there is no way to ‘tell’ the algorithm what we are looking for. Constraining an autoencoder during training (for example, by adding noise) pushes it to discover patterns in the data.”
From classification algorithms that distribute entities across decision boundaries, to optimization models that maximize an objective function, information is fed to the machine in the hope that it can find relevant features; yet the network, in turn, demands information with relevant features. Here, humans and machines must deal with the same problem.
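As a concrete instance of the constraint described in the quotation above, here is a minimal denoising-autoencoder sketch in Keras. The layer sizes and noise level are illustrative assumptions, not the configuration of the Béton model.

```python
# Hypothetical sketch of a denoising autoencoder: the input is corrupted
# with noise during training, so the network must discover structure in
# the data rather than copy pixels through.
import tensorflow as tf
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(256, 256, 3))
noisy = layers.GaussianNoise(0.2)(inputs)  # the constraint; active only during training
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(noisy)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)  # compressed map
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
outputs = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# Trained with clean images as targets, e.g.:
# autoencoder.fit(x_train, x_train, epochs=10)
```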
In a recent book on machine learning titled _Machine Learners_, Adrian Mackenzie writes:
> “Who or what is a machine learner? I am focusing on machine learners—a term that refers to both humans and machines or human-machine relations.”
Mackenzie uses the term machine learner to refer to the machine as much as to the learning human. As human-machine collaborations become more frequent, machine learners accumulate data, and both form biases. The question of how to respond to unexpected situations remains as challenging for one as for the other. When a situation and its entities become unrecognizable, biases begin to be challenged, and they must redistribute their ideas through experimentation. The stage of experimentation is crucial for allowing and engaging with different possibilities. I draw analogies to this space through latency in computing, and through the use of Béton in Lebanon.
The space in which the network learns the distribution of its data is called the latent space. Within the latent space, the neural network autoencodes its data into a very low-resolution representation, and this compressed data becomes the material it uses to interpolate through the form and composition of varying inputs. In times of necessity, Béton suddenly comes to life as it is re-purposed and re-identified. It fills cracked spaces that only adaptation can mend. Its necessity cannot come from an optimized objective function, as it has re-routed relevance from another point of interest. It is comparable to a latent variable: broken up and dispersed, it combines into different forms and compositions when spotted. Unpredicted situations force set biases to be challenged, and stir them to find new possibilities.
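To make the interpolation concrete, here is a small sketch, assuming `encoder` and `decoder` are the two trained halves of an autoencoder like the one above (hypothetical names):

```python
# Hypothetical sketch: walk a straight line between the latent codes of
# two images and decode each point, recombining compressed features
# into intermediate forms.
import numpy as np

def interpolate(img_a, img_b, encoder, decoder, steps=8):
    """Return `steps` decoded images on the line between two latent codes."""
    z_a = encoder.predict(img_a[None])   # compress each image to its code
    z_b = encoder.predict(img_b[None])
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_a + t * z_b    # linear blend in latent space
        frames.append(decoder.predict(z)[0])
    return frames
```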
# Rendering a Neural Network to Concrete
![IMG_8151](/uploads/d74d17f02d4f004ec7cb6088492a99e2/IMG_8151.png)![IMG_8169](/uploads/ec64ef0b78f4872efa2319e8bc552cd5/IMG_8169.png)