This website documents fragments of knowledge picked up in my daily work.
I’ve been asked by quite a few people what “language grounding” means. So I think I’ll write a short post explaining it, and specifically arguing why it is important for a truly intelligent agent.
Recently, I was working on a project that required learning a latent representation with disentangled factors for high-dimensional inputs. As a brief introduction to disentanglement: while we can use an autoencoder (AE) to compress a high-dimensional input into a compact embedding, the embedding dimensions are usually entangled, meaning that several dimensions change together when a single underlying factor of the data varies.
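To make the setup concrete, here is a minimal sketch of such an autoencoder. I use PyTorch and toy dimensions purely for illustration; the project itself is not tied to any particular framework, and names like `AutoEncoder` and `latent_dim` are my own.

```python
# Minimal autoencoder sketch (illustrative, not the project's actual model).
# The encoder maps a high-dimensional input x to a compact embedding z;
# the decoder reconstructs x from z. The loss only rewards reconstruction,
# so nothing pushes the dimensions of z to be independent, which is why
# they typically come out entangled.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)        # compact embedding
        x_hat = self.decoder(z)    # reconstruction
        return x_hat, z

model = AutoEncoder()
x = torch.randn(32, 784)                  # a batch of dummy inputs
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)   # pure reconstruction objective
```

Disentanglement methods such as the β-VAE add a regularizer on top of this kind of objective to encourage the latent dimensions to vary independently.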
The other day I was reading “You and Your Research”, transcribed from a seminar by Richard Hamming. There is one paragraph about “choosing important problems” that I find inspirational: