Practical Lambda ResNets

How well does this replacement for attention mechanisms in vision work?

UPDATE 10/23/2020: Just as I released this post, Google published new research on a new transformer architecture (the “Performer”) with linearly-scaling memory. The idea is that this would make transformers more general-purpose, and usable for tasks like ImageNet64 and protein folding. Had that research come out just 5 days earlier, this Lambda network post probably never would have been written. Despite that, I think most of you reading this will still find this post useful.

UPDATE 12/03/2020: Oh, Google’s released yet more new research on using transformers for visual tasks.

UPDATE 12/04/2020: I think at this point it will be more productive to produce another mega-post, this time to organize (my) thoughts on the various types of transformers, and by extension which ones are most useful for specific types of tasks. Others have done similar work ([Gabriel Ilharco’s work here is a particularly well-done example](https://gabrielilharco.com/publications/EMNLP2020Tutorial__HighPerformanceNLP.pdf)), but this one will be distinguished by 1) being more focused on practical implementation, and 2) being a more consistent, living, updated resource. I’ll let everyone know when that’s released.

You can find the full GitHub repo of one implementation (the one I’ll be using) here: https://github.com/lucidrains/lambda-networks. Normally with a research paper like this, I’d go through the trouble of making my own implementation. However, I’m using an existing one for a few reasons:

1. There are enough implementations/reviews now that a new one would be increasingly redundant. This one has made the rounds on Twitter enough to nearly break the anonymity of the blind peer-review process.
2. There’s still a lot of work to be done exploring the network’s performance. Beyond runtime and memory, there are plenty of techniques for explaining, visualizing, and interpreting attention networks that have not been applied to Lambda ResNets.
3. Phil Wang included the actual lambda symbol in his implementation. That’s right. He used $\lambda$ as an identifier in the Python source itself like an absolute champion! I’m not too proud to admit that I just cannot surpass that, no matter how many fancy code-organization or performance-optimization tools I bring to the table.

How do Lambda Networks work? (the short version)

Plenty of researchers have been trying to make attention networks work for vision, almost like they see it as the inevitable next step. The problem is that most attention mechanisms incur memory costs quadratic in the number of positions, which is prohibitive for high-resolution images. Here the authors describe an alternative formulation: rather than computing an attention map between every pair of positions, the context is summarized into a small linear function (a “lambda”) that is then applied to each query, bringing the memory cost down to linear in the context size.
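To make that concrete, here is a minimal NumPy sketch of the content-lambda computation (positional lambdas and multiple heads omitted for brevity). The function and variable names here are my own illustrative choices, not taken from the paper’s code:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def content_lambda(queries, context, W_q, W_k, W_v):
    """Each query is applied to one shared linear map (the "lambda"),
    instead of a per-query attention map over all m context positions."""
    Q = queries @ W_q                   # (n, k) query projections
    K = softmax(context @ W_k, axis=0)  # (m, k), normalized over context positions
    V = context @ W_v                   # (m, v) value projections
    lam = K.T @ V                       # (k, v) content lambda: O(k*v) memory, not O(n*m)
    return Q @ lam                      # (n, v) outputs

rng = np.random.default_rng(0)
d, k, v, n, m = 16, 8, 32, 100, 100
out = content_lambda(rng.normal(size=(n, d)), rng.normal(size=(m, d)),
                     rng.normal(size=(d, k)), rng.normal(size=(d, k)),
                     rng.normal(size=(d, v)))
print(out.shape)  # (100, 32)
```

Note that the `(k, v)` lambda never grows with the number of positions, which is exactly where the quadratic attention map would have appeared.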

Is it everything it’s cracked up to be?

Some of the language of this paper seems a little strange, almost like it was intended to sound as impressive as possible while saying very little. There are also plenty of experiments where the results are simply Out-Of-Memory errors, showing that we aren’t quite out of the woods with attention’s memory problems yet. That being said, if these ImageNet performance results are reproducible, then this would be a pretty huge moment in computer vision.

What’s it like to actually use these?

We’ve finally got a TensorFlow version of this network, so we’ll go with that.

```python
import tensorflow as tf
from lambda_networks.tfkeras import LambdaLayer

layer = LambdaLayer(
    dim_out = 32,  # output channel dimension
    r = 23,        # receptive field of the local positional lambdas
    dim_k = 16,    # key/query dimension
    dim_u = 1      # intra-depth ("u") dimension
)

x = tf.random.normal((1, 64, 64, 16))  # channels-last format
layer(x)  # (1, 64, 64, 32)
```

References

Cited as:

```bibtex
@article{mcateer2020lamresnets,
  title   = "Practical Lambda ResNets",
  author  = "McAteer, Matthew",
  journal = "matthewmcateer.me",
  year    = "2020",
  url     = "https://matthewmcateer.me/blog/practical-lambda-resnets/"
}
```

If you notice mistakes or errors in this post, don’t hesitate to contact me at [contact at matthewmcateer dot me] and I will be very happy to correct them right away! Alternatively, you can follow me on Twitter and reach out to me there.

See you in the next post 😄

I write about AI, Biotech, and a bunch of other topics. Subscribe to get new posts by email!