After creating audio tracks with sliced-up rap vocals around 2010, I was curious whether I could do the same with video, and especially in realtime. Making an audio/video edit out of sliced-up samples isn't that difficult: you can take all the time in the world to perfect it. Doing it in realtime is something else entirely. Then it becomes more like an instrument you play, one you have to build, practise and master. With this idea in mind I started doing some tests around 2018. I used animated characters from the CrazyTalk software to generate lipsync videos for some earlier synthesized vocals, so I could let those characters say anything I wanted.
Once I figured out how to 'play' video slices in realtime, I was also able to sequence different patterns for them. Having the slices played from a sequence freed me up to experiment with effects on the video as well as the audio. It wouldn't make sense to have a distortion-type effect enabled on the video while the audio stayed clean, so I synced both the video and audio effects to the same MIDI controllers. See some examples below.
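To make the routing idea concrete, here is a minimal Python sketch (not my exact setup) using the mido library: one hardware controller is fanned out to two destinations, so a single knob drives an audio effect in Ableton Live and the matching video effect in Resolume at the same time. All port names are assumptions; adjust them to whatever your system reports. Virtual output ports need the python-rtmidi backend (on Windows you'd use something like loopMIDI instead).

```python
import mido

CONTROLLER = "MyController"     # hypothetical hardware input port name
AUDIO_PORT = "To Ableton Live"  # hypothetical virtual port, mapped in Live
VIDEO_PORT = "To Resolume"      # hypothetical virtual port, mapped in Resolume

with mido.open_input(CONTROLLER) as controller, \
     mido.open_output(AUDIO_PORT, virtual=True) as audio_out, \
     mido.open_output(VIDEO_PORT, virtual=True) as video_out:
    for msg in controller:
        if msg.type == "control_change":
            # Mirror the same CC to both apps, so e.g. the distortion
            # amount on the audio always matches the one on the video.
            audio_out.send(msg)
            video_out.send(msg)
```

In practice you can get the same result purely by MIDI-mapping both applications to the same controller; the sketch just makes the "one knob, two effects" principle explicit.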
For the first tries I used stock effects from Ableton Live and Resolume. A bit later I found out about ISF shaders and implemented some of those in the setup.
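What makes ISF shaders handy for a setup like this is that they are plain GLSL fragment shaders with a JSON header in a leading comment block that declares their tweakable inputs. As a rough sketch (the filename is hypothetical), you can pull those inputs out of an .fs file in Python, for instance to decide which parameters to map to MIDI controllers:

```python
import json

def isf_inputs(path):
    """Return the INPUTS declared in an ISF shader's JSON header."""
    with open(path, encoding="utf-8") as f:
        source = f.read()
    # The JSON blob sits in the first /* ... */ comment of the file.
    header = source.split("/*", 1)[1].split("*/", 1)[0]
    return json.loads(header).get("INPUTS", [])

# "distortion.fs" is a placeholder for any ISF shader file.
for inp in isf_inputs("distortion.fs"):
    print(inp["NAME"], inp.get("TYPE"), inp.get("MIN"), inp.get("MAX"))
```

Hosts like Resolume read this same header to expose the shader's parameters, which is what makes them easy to tie to the same controllers as the audio effects.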