Autoencoders
- Autoencoders are neural models that take an input and reconstruct that same input at their output.
- The reconstruction will not be perfect
- As a result, the autoencoder is “lossy”
- The autoencoder architecture compresses the information in the input into a “latent” space (see the model sketch below).
- Autoencoders are considered to be trained in an unsupervised fashion: the input itself serves as the training target.
- The latent space (or code, or bottleneck)
  - contains all the information necessary to reconstruct the input.
  - can be viewed as a set of features that are uncorrelated with each other.
- In contrast, the input may be full of correlated features.
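To make the architecture concrete, below is a minimal sketch of a fully connected autoencoder. It assumes PyTorch, and the sizes (784 input features, i.e. a flattened 28×28 image, and a 32-dimensional latent code) are illustrative choices, not something specified in these slides.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal fully connected autoencoder sketch.

    Sizes are illustrative assumptions: 784 input features
    (e.g., a flattened 28x28 image) squeezed through a
    32-dimensional latent code (the "bottleneck").
    """
    def __init__(self, n_input=784, n_latent=32):
        super().__init__()
        # Encoder: compresses the input down to the latent space
        self.encoder = nn.Sequential(
            nn.Linear(n_input, 128),
            nn.ReLU(),
            nn.Linear(128, n_latent),
        )
        # Decoder: reconstructs the input from the latent code
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 128),
            nn.ReLU(),
            nn.Linear(128, n_input),
        )

    def forward(self, x):
        z = self.encoder(x)     # latent code / bottleneck
        return self.decoder(z)  # lossy reconstruction of x
```

Because the bottleneck is much smaller than the input, the model cannot pass every detail through, which is exactly why the reconstruction is lossy.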
- Example autoencoder applications include:
  - data denoising (a training sketch follows this list)
  - segmentation
  - anomaly detection
  - feature generation
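As one concrete example of the applications above, a denoising autoencoder is trained to map a corrupted input back to its clean version. The sketch below again assumes PyTorch; the noise level, learning rate, and layer sizes are all hypothetical choices.

```python
import torch
import torch.nn as nn

# Tiny stand-in model (same encoder/decoder idea as the sketch above)
model = nn.Sequential(
    nn.Linear(784, 32),  # encoder: 784 features -> 32-dim bottleneck
    nn.ReLU(),
    nn.Linear(32, 784),  # decoder: 32-dim code -> reconstruction
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(x_clean, noise_std=0.1):
    """One denoising step: corrupt the input with Gaussian noise,
    then train the model to reconstruct the clean version.
    noise_std is an illustrative hyperparameter."""
    x_noisy = x_clean + noise_std * torch.randn_like(x_clean)
    x_hat = model(x_noisy)
    loss = loss_fn(x_hat, x_clean)  # the clean input is its own target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random data standing in for real inputs
batch = torch.rand(64, 784)
print(train_step(batch))
```

Note that no labels appear anywhere: the training target is the (clean) input itself, which is what makes this setup unsupervised.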
Homework 10b (due April 8)
- Make all revisions requested by the instructor to your final project proposal.
Homework 11a (due April 19)
- Next class we will have a guest speaker
- Magdalena Fuentes (NYU postdoctoral fellow) will talk to us about her research.
- She will focus her presentation on these papers:
- Read Magdalena’s papers in detail
- On our course subreddit, there is a thread to discuss Magdalena’s papers.
- You must post two questions you have after reading the papers.
- Do not repeat questions that have already been asked.
- The sooner you post, the less likely it will be that somebody else already asked your question.
- You must also try to answer at least one of the questions asked by your peers.
Due April 19th at 11:59PM (Eastern Time)
© Iran R. Roman 2022