Dimension Reduction & Maximum Likelihood: How to compress your data while retaining the key features
November 30, 2021 @ 5:00 pm - 6:30 pm
Prerequisites: You do not need to have attended the earlier talks. If you know zero math and zero machine learning, then this talk is for you. Jeff will do his best to explain fairly hard mathematics to you. If you know a bunch of math and/or a bunch of machine learning, then these talks are also for you. Jeff tries to spin the ideas in new ways.

Longer Abstract: A randomly chosen bit string cannot be compressed at all. But if there is a pattern to it, e.g. it represents an image, then maybe it can be compressed. Each pixel of an image is specified by one (or three) real numbers. If an image has thousands or millions of pixels, then each of these acts as a coordinate of the point where the image sits in a very high dimensional space. A set of such images then corresponds to a set of these points.

We can understand the pattern of points/images as follows. Maximum Likelihood assumes that the given set of points/images was randomly chosen according to a multi-dimensional normal distribution, and then adjusts the parameters of this normal distribution in the way that maximizes the probability of getting the images that we have. The obtained parameters effectively fit an ellipse around the points/images in this high dimensional space. We then reduce the number of dimensions in our space by collapsing this ellipse along its least significant axes. Projecting each point/image onto this lower dimensional space compresses the amount of information needed to represent each image.

Virtual: https://events.vtools.ieee.org/m/289240
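The procedure sketched in the abstract can be illustrated in a few lines of NumPy. This is a minimal sketch, not the speaker's own code: it assumes synthetic "images" (points in a 50-dimensional space that actually lie near a 3-dimensional subspace), fits the maximum-likelihood normal distribution (sample mean and covariance), finds the axes of the fitted ellipse via an eigendecomposition, and projects onto the most significant axes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 200 points in 50 dimensions that lie near a
# 3-dimensional subspace, plus a little noise.
basis = rng.normal(size=(3, 50))
data = rng.normal(size=(200, 3)) @ basis + 0.01 * rng.normal(size=(200, 50))

# Maximum-likelihood parameters of a multivariate normal distribution:
# the sample mean and the (biased) sample covariance.
mean = data.mean(axis=0)
centered = data - mean
cov = centered.T @ centered / len(data)

# The eigenvectors of the covariance are the axes of the fitted ellipse;
# the eigenvalues measure how stretched the ellipse is along each axis.
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
top = eigvecs[:, -3:]                   # keep the 3 most significant axes

# Projecting onto those axes compresses 50 numbers per point down to 3;
# mapping back shows how little information was lost.
compressed = centered @ top
reconstructed = compressed @ top.T + mean

print(compressed.shape)  # (200, 3)
print(np.abs(reconstructed - data).max())  # small reconstruction error
```

Collapsing the ellipse along its least significant axes, as the abstract describes, corresponds here to discarding the eigenvectors with the smallest eigenvalues; only the noise that happened to point along those discarded axes is lost.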