Improving on Flat Indexes

Beyond the flat indexes that perform exhaustive searches, FAISS also has methods that compress the vectors to decrease their memory footprint. To accomplish this, FAISS provides very efficient implementations of a few basic components such as K-means clustering, PCA, and Product Quantizer encoding/decoding. We can use these components on their own for the functions they provide, but they are usually used in conjunction with other methods.

We’ve already seen how PCA can be used in Part 1, and here we will look at indexing based on the Product Quantization (PQ) vector compression algorithm. These indexes are not tree-based; instead, they gain their speed by approximating and greatly simplifying the distance calculations.
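To make the simplification concrete, here is a from-scratch sketch of the PQ idea in plain NumPy (not FAISS's implementation): each vector is split into M sub-vectors, each subspace gets its own small k-means codebook, and a query is compared against the compressed database with per-subspace lookup tables rather than full distance computations. The dimensions (8-d vectors, M=2 subspaces, k=4 centroids) are toy values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, k = 8, 2, 4          # vector dim, subspaces, centroids per subspace
ds = d // M                # dimension of each sub-vector
X = rng.standard_normal((1000, d)).astype(np.float32)

def kmeans(data, k, iters=20):
    """Plain Lloyd's algorithm, enough for a toy codebook."""
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        assign = ((data[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            pts = data[assign == j]
            if len(pts):
                centroids[j] = pts.mean(0)
    return centroids

# one codebook per subspace
codebooks = [kmeans(X[:, i*ds:(i+1)*ds], k) for i in range(M)]

def encode(X):
    """Compress each d-dim vector into M small integer codes."""
    codes = np.empty((len(X), M), dtype=np.uint8)
    for i, cb in enumerate(codebooks):
        sub = X[:, i*ds:(i+1)*ds]
        codes[:, i] = ((sub[:, None, :] - cb[None]) ** 2).sum(-1).argmin(1)
    return codes

codes = encode(X)          # 1000 vectors, now 2 bytes each instead of 32

def adc(q, codes):
    """Asymmetric distance: query vs. codes via M lookup tables of size k."""
    tables = np.stack([((q[i*ds:(i+1)*ds] - cb) ** 2).sum(-1)
                       for i, cb in enumerate(codebooks)])   # shape (M, k)
    return tables[np.arange(M), codes].sum(1)

q = X[0]
approx = adc(q, codes)     # approximate distances to all 1000 vectors
nearest = approx.argmin()  # index of the approximate nearest neighbor
```

The point of the lookup tables is that, once they are built, the distance to any encoded vector costs only M table reads and additions, independent of the original dimension d.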


Speeding up similarity search in recommender systems with FAISS 