Quantum Neural Blockchain AI for NFTs

Just in time for April Fools' Day: how to make NFT artwork for definitely-not-money-laundering (no artistic talent required).

This post is Part 3 of 3 in a larger series about NFTs. This particular post is a (farcical) guide on how to make an NFT that will capture the most hype, and by extension the most money. For reasons that will become apparent very soon, this post was intentionally released on April Fools' Day.

If you clicked on the hyperlink to this section and skipped past the explanatory or cautionary details in the previous sections, know that my Google Analytics dashboard for this site lets me know how many times that particular link has been clicked from the main series page. According to my dashboard, quite a lot of you have been clicking the shortcut to this spot… and I mean A LOT of you… many times more frequently than you've clicked the hyperlinks to the previous sections… Y'know, I'm going to avoid reading too much into that and let my faith in humanity remain intact for today. Let's get into how we can go about making a valuable NFT.

“I have no patience for any of the previous guides in this series. How do I make an NFT that will make me a lot of money?”

In the previous posts in this series I went into detail about what makes an NFT valuable. However, creating and auctioning off an NFT for a high price starting from scratch demands much more guidance. After all, one can read all the books one wants on public speaking, but keeping all the various rules and patterns in mind simultaneously is much harder than just watching a good public speaker and practicing emulating them. As such, I've detailed the thought process I went through in creating a highly-priced NFT artwork of my own, all while the art and the guide themselves serve as a colossal middle finger to the current NFT bubble and those seeking get-rich-quick schemes from it.

Think Penn and Teller meets Banksy.


The Theme: QNBAI

One way to make an NFT valuable is to appeal as much as possible to the Silicon Valley nouveau riche. Beyond digital scarcity, we can do this by making as many hot buzzwords apply to the NFT as possible. It used to be the case that one could simply say a project was “blockchain for this” or “blockchain for that” and investors would throw money at you, but today's investors are (slightly) more discerning than that. We probably want to diversify our buzzword portfolio beyond “blockchain” if we want to compete with other NFTs.

For inspiration on how to do this, we turn to a surprising source: Stephen Wolfram.

Exactly 3 years ago to the day, Stephen Wolfram released a blog post titled Buzzword Convergence: Making Sense of Quantum Neural Blockchain AI. Wolfram decided to tackle some of the buzzwords that have been (and still are) taking up space in pitch decks to make venture capitalists swoon, by imagining what it would be like if those buzzwords weren't quite so empty. As a common theme, Wolfram points to the generation of complexity from simple rules, as well as to the dichotomy of reversible and irreversible processes. He then ties this back to the physical laws and processes behind how the universe itself creates complexity. It was surprisingly profound as far as company April Fools' Day posts go (I certainly got more use out of it than I did YouTube's April Fools' introduction of Snoop Dogg as the project architect of its 360-degree video codec).

That being said, one of Stephen Wolfram's pet projects is a theory of physics that models the universe as a cellular automaton on hypergraphs. When all you have is a hammer, everything looks like a nail. If Stephen Wolfram's “hammer” is cellular-automata-based theories of everything, our hammer is artificial digital scarcity that people will pay for. We can compensate for part of the “haven't built up a brand at all” problem by using this “Quantum Neural Blockchain AI” framework as inspiration. Let's break it down.

Examples of hypergraph visualizations produced during the Wolfram Physics project. It is beautiful, but what we’re going to do here is not that. What we’re going to do will be less profound than even Stephen Wolfram’s most flippant and forgettable remarks.

The “AI” Part

There's a lot of debate about whether such AI techniques can truly be creative in the sense that humans can be, or whether it's even possible to attain that capability. As François Chollet pointed out in a recent tweet:

However, we’re interested in incorporating deep learning precisely because it makes headlines.

AI-based artwork has gotten a lot of attention lately, in part due to the recent release of OpenAI's CLIP and DALL-E. While the full weights for DALL-E haven't been released as of this writing, CLIP both has been released and is thankfully much simpler to implement. Here's how CLIP works: CLIP pre-trains an image encoder and a text encoder to predict which images were paired with which texts in a given dataset. This means one can use CLIP as a zero-shot classifier: a dataset's class names (such as ImageNet's) are all turned into captions like “a photo of a dog”, and CLIP estimates which caption best matches a given image.

CLIP overview. Nearly identical to the diagram from OpenAI’s official post, except I think the puppies in MY example images are cuter.
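To make the zero-shot trick concrete, here is a minimal sketch using OpenAI's open-source clip package; the image path and candidate class names are stand-ins, and the model choice is arbitrary:

```python
# Minimal zero-shot classification with OpenAI's open-source CLIP package
# (pip install git+https://github.com/openai/CLIP.git). The image file and
# candidate class names below are placeholders.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("some_photo.png")).unsqueeze(0).to(device)
captions = [f"a photo of a {name}" for name in ("dog", "cat", "waluigi")]
text = clip.tokenize(captions).to(device)

with torch.no_grad():
    # logits_per_image holds the similarity of the image to each caption.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print(dict(zip(captions, probs[0].tolist())))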

It would have been nice if we had DALL-E (a zero-shot learner on par with OpenAI's GPT-3) as well, but we don't need it if we already have CLIP. Not only is CLIP simple, it can easily be combined with generator architectures and techniques other than DALL-E's. Here are some examples of those generators:

| CLIP-guided Generator | What it Does |
| --- | --- |
| DALL-E | Discrete VAE (variational autoencoder) component from DALL-E |
| StyleGAN2 | Optimizes human faces |
| VQGAN | More stable Transformer-based GAN |
| BigGAN | Optimizes ImageNet classes |
| SIREN | Implicit activation functions |
| DiffVG | Optimizes SVGs with gradient ascent |
| FastGAN | Faster GAN training |
| VGG19 Neural Style Transfer | VGG19’s conv4_1 to generate images |
| Stylized Neural Painter | Applies the Neural Painter algorithm |
| FFT | Uses FFT (Fast Fourier Transform) from Lucent/Lucid |
| GLaSS | Latent space exploration with genetic algorithm |

For example, take Waluigi (pictured below on the far left, and possibly one of the most forgettable characters ever created by Nintendo). What happens if we prompt CLIP with the text "a handsome Waluigi"? It turns out that OpenAI’s CLIP was trained on enough data to have at least a vague idea of the concept of “Waluigi”. To take a page from Platonic Idealism, CLIP has been trained to the point where it has the beginnings of an ideal form of “Waluigi”. As for the different generative models we’ve paired with CLIP, these different techniques give us different levels of abstraction and realism.

ON THE LEFT: Waluigi, a one-off character created when the programmers of a tennis-themed Mario spinoff game were running out of ideas for doubles partners. IN THE CENTER: Results of “a handsome Waluigi” fed as a prompt to CLIP+SIREN. ON THE RIGHT: Results of “a handsome Waluigi” fed as a prompt to CLIP+GLaSS.

One of these CLIP-based methods is Vadim Epstein’s CLIP+FFT, which uses OpenAI’s CLIP to judge whether images match a given caption and an FFT-based image parameterization to come up with new images to present to CLIP. Give it any random phrase, and CLIP+FFT will try its best to come up with a matching image. Janelle Shane demonstrated that this works for sea shanty lyrics as well, with outputs that can be layered into a surrealist music video.
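All of these CLIP-guided methods share the same basic loop: parameterize an image somehow, score it against the caption with CLIP, and nudge the parameters to raise the score. Here's a stripped-down sketch of that loop; for brevity a raw pixel tensor stands in for the generator (Epstein's CLIP+FFT actually parameterizes the image in Fourier space):

```python
# Bare-bones CLIP-guided image optimization. The pixel-tensor "generator"
# is a simplification; real methods swap in FFT params, a GAN latent, etc.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # CLIP loads in fp16 on GPU; fp32 keeps gradients simple

text = clip.tokenize(["a handsome Waluigi"]).to(device)
with torch.no_grad():
    text_features = model.encode_text(text)

# Optimize the image pixels directly (224x224 is CLIP's input resolution).
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(300):
    optimizer.zero_grad()
    image_features = model.encode_image(image.clamp(0, 1))
    # Gradient ascent on the cosine similarity between image and caption.
    loss = -torch.cosine_similarity(image_features, text_features).mean()
    loss.backward()
    optimizer.step()
```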

However aesthetically pleasing our AI-generated artwork is, using open-sourced algorithms is kind of at odds with the scarcity aspect of our art. Throwing additional cloud credits at the generative model will probably result in diminishing returns. Rather than reflect on how there might be more to the value of computer generated art than the quantity of cloud credits we throw at making it, we can just get more creative with how we spend those cloud credits.

The “Quantum” Part

As Stephen Wolfram’s buzzword-convergence post explains, quantum mechanics has actually been around for more than a century, and it isn’t quite as mysterious as Deepak Chopra makes it out to be. Quantum computers were formulated as a response to a question: If there are processes in the universe like quantum mechanics that classical computers can’t simulate that well, why don’t we just close the gap by making computers out of these processes? As a result, a bunch of really smart people started making computers that used things like entanglement and quantum superposition to do calculations. It wasn’t as simple in practice for a few reasons, though. One fly in the ointment was that many people had different frameworks for how to build these computers. Another problem was that when it came to building the hardware, hardware that was great at shielding the quantum phenomena from measurement/decoherence was very hard to get outputs from (and vice versa). As a result, the state of quantum computing today is probably comparable to what classical computer hardware was like in the era of vacuum tubes or the very first transistors.

However, we still have one thing working in our favor: quantum computing may be tricky and expensive as hell, but wasteful use of new technologies is exactly what we set out to do. This lends itself nicely to our approach of making computational outputs scarce. After all, if we want someone to buy our art as a status symbol, what better status symbol is there than using the forces that guide the building blocks of the universe to piss away money?

As mentioned before, we can get creative with how we spend our cloud credits. We have a few options for Cloud Quantum Computing.

  • AWS Braket - As yet another burden on developers making flashcards to keep track of all the AWS services to maintain their AWS Developer certificate, AWS added Braket as its quantum computing service. Beyond just simulators, at the time of this writing we can also access hardware from multiple developers: D-Wave’s quantum annealers, plus gate-based machines from Rigetti and IonQ.

  • IBM - IBM also has a quantum computing cloud service. While it’s fun dumping on IBM for trying to stay relevant compared to Honeywell’s quantum computing success, to their credit their quantum computing service does have one of the better UIs out of all these options.
  • Google Cloud - As of this writing, Google Cloud also offers its quantum simulator library Cirq. While that’s great for classically simulating quantum circuits, there’s a much longer waitlist for running programs on Google’s own quantum computers in Santa Barbara, CA. This exclusivity could work in our favor in producing the artwork, but then again we are pursuing this “Quantum Neural Blockchain AI” approach because we’re too impatient for the other types of status-building pursued by artists.

Not to mention all those other issues with Google’s quantum computing research program.

For our piece, we will go with hardware available through AWS. Beyond the benefits of more hardware options and slightly shorter wait times, this gives us a way of hedging our bets in terms of computational scarcity. One current limiter on the output of GAN-generated art is the ongoing microchip shortage. Given that AI-inference-capable chips that were once just used in PCs are now being used across all kinds of devices (e.g., phones, wearables, cars, drones, etc.), this new demand is putting a squeeze on both supplies of assembled chips and supplies of the rare-earth metals that go into semiconductors. Quantum computers are still in the domain of research and haven’t hit primetime yet. However, should the time come when de-noising on quantum computers gets good enough for many-qubit chips, this kind of scarcity will likely hit superconductors and dilution-fridge insulators.

Yes I know Jeff Bezos stepped down, but I don’t want to retire this meme format.
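For what it's worth, submitting a circuit to Braket only takes a few lines. Here's a minimal sketch with a 2-qubit Bell circuit; the QPU ARN is illustrative, so check the Braket console for what's actually online (and what it costs) before uncommenting it:

```python
# A 2-qubit Bell circuit on Amazon Braket: free on the local simulator,
# expensive and queued on real hardware.
from braket.circuits import Circuit
from braket.devices import LocalSimulator

bell = Circuit().h(0).cnot(0, 1)

result = LocalSimulator().run(bell, shots=1000).result()
print(result.measurement_counts)  # roughly half "00", half "11"

# To burn actual money, swap in a QPU (example ARN, check the console):
# from braket.aws import AwsDevice
# qpu = AwsDevice("arn:aws:braket:::device/qpu/ionq/ionQdevice")
# task = qpu.run(bell, shots=1000, s3_destination_folder=("my-bucket", "results"))
```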

By the way, you may have looked at the different hardware and seen that we’re deciding between 31-qubit, 11-qubit, or 5000-qubit machines. What gives? Isn’t more qubits the obvious choice? It’s actually not that simple. Yes, qubit counts matter a little, but what matters more is how well those qubits can be shielded from noise from the rest of the universe (i.e., a few good-quality qubits are worth more than lots of low-quality qubits). The type of quantum computer those qubits are a part of matters a lot as well. Quantum annealers (like D-Wave’s machines) are great if we want to solve an optimization problem: they are built so that quadratic unconstrained binary optimization (QUBO) tasks and Ising models can be easily run on quantum hardware. They are much more geared towards these kinds of problems than universal gate-based quantum computers (like Rigetti’s and IonQ’s hardware), though the latter are more versatile for problems that don’t reduce to QUBO or Ising models. At the risk of oversimplification, quantum annealers vs. universal gate-based quantum computers is kind of like comparing CPUs to GPUs: which hardware we choose ultimately depends on the kind of program we want to run.

I apologize if that last analogy reminded certain readers of the ongoing GPU shortage (the one that was going on even before they all got bogged down in traffic in the Suez Canal). I should have known better.
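For concreteness, here's the kind of QUBO problem an annealer eats for breakfast, sketched with D-Wave's open-source dimod package and its brute-force ExactSolver (no QPU, and therefore no bill, required):

```python
# A toy QUBO: minimize E(x0, x1) = x0 + x1 - 2*x0*x1 over binary variables.
# The ground states are x0 == x1 (both 0 or both 1), each with energy 0.
import dimod

bqm = dimod.BinaryQuadraticModel(
    {"x0": 1.0, "x1": 1.0},      # linear terms
    {("x0", "x1"): -2.0},        # quadratic coupling
    0.0,                         # constant offset
    dimod.BINARY,
)

sampleset = dimod.ExactSolver().sample(bqm)
print(sampleset.first)  # lowest-energy assignment and its energy
```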

This brings us to the question of what quantum algorithm we’ll actually be running on these machines. We can’t just assume all possible bidders would equate $\text{Quantum}=\text{Magic}$ and give us their internet money. We need to decide what computations we’ll actually use this quantum hardware for.

One option is to run a quantum algorithm on some real-world data. We have a few possibilities:

Quantum Machine Learning: Plenty of machine learning algorithms have been adapted for quantum circuits. These range from quantum classifiers to quantum autoencoders to even quantum NLP methods. There are even libraries such as PennyLane built for programming quantum computers in a similar fashion to neural networks.

Tutorial on Quantum Neural Network Classifier
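As a taste of what "programming quantum computers like neural networks" looks like, here's a minimal PennyLane sketch of the variational-classifier pattern; the circuit layout, input encoding, and training target are all arbitrary choices for illustration:

```python
# A tiny variational circuit trained like a neural network with PennyLane.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(weights, x):
    qml.RX(x[0], wires=0)             # encode a 2-feature input
    qml.RX(x[1], wires=1)
    qml.RY(weights[0], wires=0)       # trainable rotations
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])            # entangle
    return qml.expval(qml.PauliZ(1))  # output in [-1, 1]

weights = np.array([0.1, 0.2], requires_grad=True)
x = np.array([0.5, -0.3], requires_grad=False)
opt = qml.GradientDescentOptimizer(stepsize=0.2)

for _ in range(50):
    # Nudge the circuit's output towards the label +1.
    weights = opt.step(lambda w: (circuit(w, x) - 1.0) ** 2, weights)

print(circuit(weights, x))
```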

As you can imagine, these are much more expensive to run than traditional ML algorithms, even the ones hogging your expensive NVIDIA GPU. There’s also the problem of how to run inference once we’re done training. Some of the basic features that we would normally rely on for machine learning on classical computers (like RAM, or disk space for saving model weights) have yet to be reliably implemented on quantum hardware. There are plenty of physicists and mathematicians (like Gil Kalai) who think that, because the amount of noise could scale directly with qubit number, denoising on quantum computers might never be good enough to make large applications like machine learning feasible except on paper.

Of course, there’s a reason why critics who believe QRAM might never be possible don’t often get that sweet research grant money for quantum computing.

If we want to create a work of art, let alone one in high enough resolution to warrant minting a ground-truth NFT, we need to do something with a smaller data bandwidth. It would be great if we could find some algorithm that’s fundamental to all kinds of more famous quantum algorithms, and then incorporate that into our piece.

Quantum Statistical Analysis: Plenty of quantum algorithms have been devised for regular statistical analysis. Many of these involve repurposing algorithms like Gaussian processes and Bayesian networks for use on quantum chips, though we also have simpler ones at our disposal. If we really wanted, we could use a quantum computer for linear regression. Then again, we need to keep in mind that quantum computing hardware is still in its very early stages. Out of all the statistical algorithms at our disposal, we should pick one that’s versatile enough for a wide range of real-world data, but also feasible to run on one of the types of quantum hardware available to us.

Principal Component Analysis (PCA) has applications in fields ranging from quantitative finance to neuroscience. It may serve as a preliminary step for many techniques in statistics and machine learning, or it can be the end in itself. PCA used to be extremely difficult on NISQ (noisy intermediate-scale quantum) computers, but the development of the variational quantum state eigensolver (VQSE) method has reduced the circuit-depth and qubit requirements enough to put it back within reach. Simply put, this is an approach for extracting the eigenvalues and eigenvectors from a density matrix. The eigenvector with the largest eigenvalue represents the axis of greatest variance, with the eigenvalue giving its magnitude. If we’re looking for the 1st principal component, that’s pretty much all we need.

Tutorial on variational quantum state eigensolver (VQSE)

Schematic of our Variational Quantum State Eigensolver, which is implementable on the kind of gate-based hardware available through Braket.
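For intuition about what VQSE is actually extracting, here's the classical version of the computation in NumPy; a real VQSE would get the same top eigenvector variationally on quantum hardware, and the synthetic data here is just a placeholder:

```python
# What VQSE extracts, computed classically: the dominant eigenvector
# (1st principal component) of a density-matrix-like covariance matrix.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=(100, 4))     # 100 samples, 4 features (synthetic)
data -= data.mean(axis=0)            # center the data

cov = data.T @ data / len(data)      # covariance matrix
rho = cov / np.trace(cov)            # unit trace, like a density matrix

eigvals, eigvecs = np.linalg.eigh(rho)       # ascending eigenvalues
print("largest eigenvalue:", eigvals[-1])
print("1st principal component:", eigvecs[:, -1])
```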

Feeding real-world data through VQSE sounds like it could be perfect for our extravagant use of quantum hardware. Then again, this might not be as perfect as we’d hoped. Those of you who clicked on the links and read more into how VQSE works will notice that it’s a hybrid quantum-classical algorithm. True, as a quantum simulator it might provide enough realism to produce practical and usable results for quantum chemistry, but as modern artists we were never interested in “practical” or “realistic” now, were we?

Much of our target market will probably fall into the camp on the right side: Not really understanding how this works, but making a fuss that “hybrid quantum” ≠ “full quantum”. Credit goes to Zach Weiner of SMBC. Also, please do not make NFTs of art that is not your own. Beyond being legally and morally wrong, public blockchains provide a paper trail that the artist (and/or their more obsessive fans) can and will use to hunt you down.

What are our other options for full-quantum algorithms?

Quantum Fourier Transform (QFT): The QFT is an important subroutine in many quantum algorithms, most famously Shor’s algorithm for factoring and the quantum phase estimation (QPE) algorithm for estimating the eigenvalues of a unitary operator. The QFT can be performed efficiently on a quantum computer, using only $O(n^2)$ single-qubit Hadamard gates and two-qubit controlled phase shift gates, where $n$ is the number of qubits.

Tutorial on Quantum Fourier Transform (QFT)

A very basic version of a 2-qubit QFT circuit drawn in Quantum Flytrap, which I highly recommend checking out if you want a better intuition for photonic quantum circuits
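The circuit generalizes mechanically to more qubits: each qubit gets a Hadamard followed by progressively smaller controlled phase rotations, with swaps at the end to fix the qubit ordering. Here's a sketch with Braket's circuit API, matching the $O(n^2)$ gate count mentioned above:

```python
# Textbook QFT construction: Hadamards plus controlled phase shifts.
import math
from braket.circuits import Circuit

def qft(n: int) -> Circuit:
    circ = Circuit()
    for j in range(n):
        circ.h(j)
        for k in range(j + 1, n):
            # Controlled rotation by pi / 2^(k - j) radians.
            circ.cphaseshift(k, j, math.pi / 2 ** (k - j))
    for j in range(n // 2):
        circ.swap(j, n - 1 - j)  # reverse qubit order
    return circ

print(qft(3))  # ASCII diagram of the 3-qubit QFT
```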

We can define the quantum Fourier transform, by analogy with the Fast Fourier Transform, as follows:

$$\sum_j \alpha_j | j \rangle \rightarrow \sum_k \alpha_k | k \rangle \quad \text{where} \quad \alpha_k \equiv \frac{1}{\sqrt{N}} \sum^{N-1}_{j=0} e^{\frac{2 \pi i k j}{N}} \alpha_j$$

FFT and QFT look pretty similar, right? Unfortunately, there are a few important differences between them that we need to keep in mind. Our first problem is that loading the coefficients ($\alpha_j$) for the computational basis states is much less simple in practice. Our second problem is that there is no effective way to retrieve the coefficients ($\alpha_k$) of the computational basis states after the transformation: the measurement at the end of the circuit returns a computational basis state, not a coefficient. Since the QFT is usually used as a sub-component of other quantum algorithms this usually isn’t a problem, but in this case we actually want the coefficients. There’s also the fact that a classical FFT will give us the full spectrum of frequencies, while a single run of the quantum Fourier transform will only give us one measured output. Lastly, let’s also not forget that our inputs and outputs will all be in a binary format limited by the number of decent qubits we will probably have.

Credit for the art goes to Penny Arcade (this is not the original text), who rumor has it is creating their own NFTs soon. Hopefully by raising awareness of that fact, I can persuade Mike and Jerry to forgive me for turning their comic into a meme format. Also, please do not make NFTs of art that is not your own. Beyond being legally and morally wrong, public blockchains provide a paper trail that the artist (and/or their more obsessive fans) can and will use to hunt you down.

In my defense, we had a perfectly good hybrid-quantum algorithm in the previous section, but nooOOOoooo our extravagant use of quantum computer time was apparently too good for that.

It’s not the end of the world, though. We still have options for even simpler but no less wasteful uses of our access to cloud quantum computers.

Quantum Randomness: Okay, now we’re scraping the bottom of the barrel. One option is to simply use the quantum hardware to generate perfect seed values for the generative models. After all, if we can convince potential buyers of our AI-generated art that one can’t just reproduce this with seed values like 42 or 1337 or 80085, we could add more to the scarcity aspect.

What our quantum sampler looks like in Quantum Flytrap. This is about as simple as quantum circuits get

However, this alone might not be enough to offset the drawbacks of using open-source algorithms. Also, as companies like Cloudflare have demonstrated with their wall of lava lamps, there are far more artful methods for generating random numbers.

Ironically, this might be the kind of random number generator that attracts even the “Quantum==Magic” crowd. Clearly we have some fierce competition in artisanal randomness

Tutorial on Quantum Sampling
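This is genuinely about the simplest quantum program there is: a Hadamard on every qubit, then a measurement. A sketch with Braket's local simulator (swap in a QPU ARN for maximum wastefulness):

```python
# Quantum "seed generator": superpose every qubit, measure once, and
# read the resulting bits off as an integer.
from braket.circuits import Circuit
from braket.devices import LocalSimulator

n_bits = 8
circ = Circuit()
for q in range(n_bits):
    circ.h(q)  # put each qubit in an equal superposition

counts = LocalSimulator().run(circ, shots=1).result().measurement_counts
bitstring = next(iter(counts))   # the single measured bitstring, e.g. "01101001"
seed = int(bitstring, 2)
print(f"quantum seed: {seed}")
```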

Much like a Hollywood screenwriter writing multiple versions of scenes to fit different budgets, we have a few options for algorithms to use with Braket. Still, even if we can settle on the quantum algorithm, and successfully run it on actual quantum hardware instead of a classical simulator, how do we choose what real-world data to run it on? As before, the buzzwords have provided guidance…

The “Neural” Part

There has been a marked uptick in companies branding themselves as “neuroscience” startups. This goes beyond the early-2010s startups falsely equating neural networks with biological neurons. Plenty of companies are getting into the brain-computer-interface (BCI) space, with the intent of reading, interpreting, or even manipulating the activity of the brain. Companies like Flow Neuroscience are working on ways to stimulate neurons as part of treatments for conditions like Alzheimer’s and depression. Halo Neuroscience is targeting the training regimens of professional athletes. Dreem touts a “bone conduction” EEG headband to monitor brain activity during sleep. Companies like Kernel are developing methods for much-higher-resolution brain-mapping. If these don’t sound familiar, you’ve probably heard of how Neuralink is developing ways to control software and devices using only brain activity (thanks to implants that directly link computers to fleshy brains).

You get the idea. A lot of stuff is going on with actual neurons this time around.

Of course, getting clear information from the brain is hard work. Tim Urban explains in depth why this is much harder than it sounds, even if it already sounds pretty damn hard to start with. The short version is that current BCIs differ when it comes to invasiveness (i.e., whether surgery is needed), scale (how much of the brain’s activity can be recorded at once), spatial resolution (how few neurons a signal can be narrowed down to), and temporal resolution (how detailed the recordings are over a length of time). Each method has tradeoffs across all these qualities, and no BCI method exists yet that scores high marks on all of them. With that in mind, I hope you’ll forgive me for not opening up my skull for the sake of an art piece.

One of the most complete connectomes created thus far. This was done for a fruit fly, and it would be great if we could see what a human connectome would look like. However, the fruit fly this connectome belonged to had to DIE first in order to get this. I’ll stick with non-invasive, non-lethal methods for now, thank you very much.

Still, we should be able to get something usable out of our current technology. Eliminating invasive BCI approaches helps us narrow down the technology we’ll use for the “neural” part of this art piece. This strikes technologies like electrocorticography (ECoG), local field potentials (LFP), and single-unit recording from the candidate list, leaving magnetoencephalography (MEG), electroencephalography (EEG), and functional magnetic resonance imaging (fMRI). All of these can provide information across the entire brain at once. MEG and fMRI have better spatial resolution than EEG (which just provides sums of activity across large sections of the brain), but they can’t yet compete with EEG’s ability to get signals right as they happen (great temporal resolution). Also, MEG and fMRI both require magnetically shielded rooms, and I don’t have the same access to those as when I worked in a neurology lab 4 years ago. For our purposes, fMRI and MEG both cost a pretty penny, while EEG is cheap as hell by comparison. We’ll go with EEG for getting our neural signals.

Back in 2019 I was able to get my hands on a Neurosity Notion headset. There are a variety of pros and cons to the various EEG-based BCIs from the companies mentioned above, but at the moment the most salient one is this: I actually have one of these lying around. Unlike tools like Kernel’s Flux, or any of the countless other BCIs, I don’t have to wait around while a release date gets moved from 2020 to 2021. While I know that Neurosity released a new wearable called the Crown, I’m too impatient to order the new model, so I’m working with what I’ve got. Now that we have the technology, we need to define what information we can get out of it, and by extension how we can incorporate it into our artistic workflow.

What the Notion looks like, and what I look like wearing it. No. I will not show the side or back profile that reveals the back of my quarantine haircut.

The Notion 2 is an 8-channel EEG headband (with an additional reference and bias sensor), with nodes aligned with the temporal and frontal axes of the brain. Using the 10-10 notation for EEG caps (the two “10”s refer to the fact that the actual distances between adjacent electrodes are 10% of either the total front-back or right-left distance of the skull), our electrodes are located at PO3 and PO4 around the parieto-occipital sulcus, F5 and F6 near the frontal lobe, C4 and C3 near the central sulcus, and CP4 and CP5 straddling the area near the central sulcus and parietal lobe, with additional reference and bias sensors at T7 and T8 over the temporal lobe.

A detailed version of the 10-10 system. Based on the locations, this is mainly getting readings that are sums of activity in the temporal, frontal, and parietal lobes, plus the sulci in those general areas.

Of course, with all the noise introduced by the hair, skin, and bone between the sensors and the actual brain, this level of detail in the locations could turn out to be totally meaningless in the long run (especially if I somehow manage to mess up putting it on correctly).

While this might not be as detailed as the denser EEG caps (it’s geared towards measuring brain waves related to attentional focus, not really the whole brain), it does have the added convenience of being easier to put on and calibrate (in my experience, putting on an EEG cap usually requires a second person to make sure it’s fitted correctly). Plus, I don’t need to shave my head to get an adequate signal.

In terms of the types of information we can get, the Notion 2 as a product is designed to report how focused the wearer is on a particular task (it’s marketed towards software engineers). However, Neurosity recently added support for the Brainflow library for processing raw EEG signals. The standard practice for analyzing EEG readings is to shift the signals from the time domain to the frequency domain using our old friend the FFT. This has been a practice in neuroscience for about a century, and has resulted in categorizations of brainwaves like the following:

Image credit goes to Dr. Yohan John, who is skeptical of most theories of what brain rhythms do.

Tutorial on manipulating EEG data with Brainflow Library
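Here's the shape of that workflow with Brainflow. The synthetic board is used so the sketch runs anywhere; streaming from an actual Notion means swapping in the appropriate board ID and connection parameters:

```python
# Record a few seconds of (synthetic) EEG with Brainflow, then shift one
# channel into the frequency domain with an FFT.
import time
import numpy as np
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

board_id = BoardIds.SYNTHETIC_BOARD.value
board = BoardShim(board_id, BrainFlowInputParams())
board.prepare_session()
board.start_stream()
time.sleep(5)                        # record ~5 seconds
data = board.get_board_data()
board.stop_stream()
board.release_session()

fs = BoardShim.get_sampling_rate(board_id)
eeg_channels = BoardShim.get_eeg_channels(board_id)

signal = data[eeg_channels[0]]       # first EEG channel
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2
alpha = power[(freqs >= 8) & (freqs <= 12)].sum()
print(f"alpha-band (8-12 Hz) power: {alpha:.2f}")
```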

BCIs with better spatial resolution might render this kind of analysis obsolete, but as with all our other design choices, we’re sticking with what we’ve got. This approach to quantifying brain activity has plenty of drawbacks and pitfalls. However, it at least allows us to convert the readings of the human brain, with all its complexity and sensations and thoughts and feelings and hopes and dreams, into a low-information data packet that lets us make some kind of decision about the direction of our quantum computer algorithms and generative AI models.

Another benefit of using EEG: it is much easier than spike-sorting silicon probe data

The “Blockchain” Part

If the rest of this post hasn’t hammered home the “Blockchain” part yet, here it is again. An NFT is on the Blockchain. This is a blockchain project. This is the part where we put this art piece of ours on the blockchain.

In the case of our NFT, we just need to make sure to really think through our deployment strategy. As mentioned earlier, it would be great to combine IPFS and Arweave hashes, and to make sure there are some on-chain properties of the art we can store as well. None of our options is perfect, but this is a far cry better than just uploading JPEGs to NFT websites.

Then again, if all we care about is selling our NFT and letting the buyer worry about the file being accessible for the next decade, we can just choose the lazy option and go with an NFT marketplace like Viv3 (built on Flow, which is made by the CryptoKitties people, and is also the blockchain used by NBA Top Shot).

Viv3, down for what feels like the 7th time. It works now, but I was too lazy to check back and get another screenshot.

Tutorial on programmatically interacting with the Flow blockchain

If needed we could just go with Beeple’s approach of providing additional physical documentation and verification.

Putting it all together

In the first stage, we acquire the brainwaves of the subject. This could either be the artist carrying this out (i.e., myself) or someone acting as a brainwave model for the artist. These brainwaves can then be correlated with some sort of audio using simple statistical analysis or basic signal processing, both of which are simple enough to do even on today’s limited quantum computing hardware.

In the second stage, we use a quantum processing algorithm to separate out the brainwaves collected in the previous step. Putting entire brainwave reads through a transformer network on a quantum computer might be beyond all the QC hardware in the world at the moment. However, we can use some clever signal processing to compress the brainwaves into a form usable by a quantum algorithm that can fit on current chips. If quantum neural networks are outside our budget, we can rely on something like the VQSE or QFT described earlier. If we’re really on a shoestring budget, we can just use this resource as a fancy random number generator.

In the third stage, we take the quantum computer outputs and use them to initialize our CLIP-based generative model. Depending on the algorithm we used, we could pair the real-world data with keywords that can be used as inputs for CLIP. If our quantum computing outputs weren’t quite that detailed, we could always use them as either layer initializations or seed values.

Finally, we create our NFT. We can either try minting this NFT ourselves (making the most of IPFS and Arweave to ensure stability), or just mint the final product on a marketplace like Viv3 and hope prospective buyers have done very little due diligence on the stability of links on the marketplace’s blockchain (if they have, we can just provide the additional IRL documents). This would be followed by auctioning it off (though not without the appropriate marketing, of course).

A high level overview of this incredibly farcical use of cutting-edge technologies

This summary is light on specifics because there are so many ways we can modify it. The high level workflow, with all its branching and converging decision junctions, might look like a flowchart.
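Squashed into pseudocode, it looks something like the following. Every helper name here is hypothetical glue standing in for the components sketched throughout this post:

```python
# The whole farce, end to end (all helper functions are hypothetical).
def make_qnbai_nft(prompt: str):
    # Stage 1, "Neural": record EEG and reduce it to band powers.
    eeg = record_eeg_session(seconds=60)                   # hypothetical
    band_powers = fft_band_powers(eeg)                     # hypothetical

    # Stage 2, "Quantum": compress the signal and run whichever quantum
    # algorithm the budget allows (VQSE, QFT, or plain sampling).
    features = run_quantum_step(band_powers)               # hypothetical

    # Stage 3, "AI": steer CLIP-guided generation with the quantum outputs.
    artwork = clip_guided_generate(prompt, seed=features)  # hypothetical

    # Stage 4, "Blockchain": pin the file properly, then mint and auction.
    uri = pin_to_ipfs_and_arweave(artwork)                 # hypothetical
    return mint_nft(uri)                                   # hypothetical
```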

If we wanted to take this approach of using buzzword technologies as a crutch for a lack of artistic talent or brand, we could add even more to the mix. If we wanted to shoehorn “space” onto the end, we could do all this cloud computing over Starlink’s internet service from a cabin in the woods. There are enough Musk-worshippers out there willing to pay a premium for much sillier things. If private AI gets enough hype, we could train our custom CLIP models using OpenMined.

This even works if the goalposts for some of our buzzwords move. For example, when people talk about AI they usually think of deep neural networks. If you bring up support vector machines, decision trees, or A* search (all of which were considered state-of-the-art AI at some point), you’ll get more responses of “Oh, that doesn’t really count as AI now, does it?“. If our CLIP-guided generative models suddenly don’t count as “AI” because of newer advances, we could do something like train OpenAI’s Dactyl to paint with a physical paintbrush.

Or we could just hard-code a Shadow Dexterous Hand to hold the paintbrush

What Does the Final Result Look like?

Wow, a lot of people clicked on the hyperlink to just this part of the post.

Can’t reveal that just yet. 😉

As discussed before, part of the value behind an NFT is the scarcity, and there are all too many horror stories of long-time digital artists finding that their work was stolen and minted as an NFT without their knowledge. By the way, if you skipped right to this part of the post: I’ve mentioned several times that you should never mint an NFT of artwork that is not your own. Legal issues aside, the NFT provides a paper trail that the original artist (and possibly some of their more rabid fans) can and will use to hunt you down.

Also, pulling the subject matter for this post out of my ass was admittedly much quicker than creating the full piece. Some of the bottlenecks have included, but are not limited to…

  • …coordinating with the brainwave model while the pandemic is still going on. 😷
  • …replacing an EEG electrode that was stolen by a Saint Bernard (long story, but even if we get the original electrode back, nobody will want to put that thing anywhere near their head). 🐶
  • …running the actual quantum computations on AWS. 🔮
  • …needing to do much more due diligence on which NFT-minting platform to use (or whether to do it from scratch), given that it turned out many of the platforms offer little-to-no protection against link breakage. 🕵️
  • …much of my schedule being spent using these technologies for use cases that will actually impact people’s lives, working on projects that have actual practical value. 🗓️

Truth be told, part of the motivation for this post was to recycle a bunch of blog post ideas I’d been sitting on for a long time but had been too busy to release final drafts of. April Fools' Day seemed like the most thematically appropriate day for releasing this mashed-together, train-of-thought post.

Still, some of you (especially those of you working in Venture Capital) might be feeling more blue-balled than a defense contractor at the end of the Cold War. Not to worry, the art piece is in progress. In the end, the actual piece is being released separately from this post (possibly for the better). I’ll probably make some kind of announcement when that happens. If it does sell for a lot I’ll try to find a somewhat noble use for the proceeds (e.g., investing in longevity research).

Then after that? I’m not sure. Maybe I’ll take the joke even further by SPACing an LLC that’s a quantum neural AI NFT hedge fund, whatever that means. 🤷‍♂️



Cited as:

@article{mcateer2021ultimatenft,
    title = "Quantum Neural Blockchain AI for NFTs",
    author = "McAteer, Matthew",
    journal = "matthewmcateer.me",
    year = "2021",
    url = "https://matthewmcateer.me/blog/quantum-neural-blockchain-ai-for-nfts/"
}

If you notice mistakes and errors in this post, don’t hesitate to contact me at [contact at matthewmcateer dot me] and I will be very happy to correct them right away! Alternatively, you can follow me on Twitter and reach out to me there.

See you in the next post 😄
