Genie: Generative Interactive Environments
82 points by kuter 1 year ago | 16 comments

- jasonjmcghee 1 year ago
> Genie is capable of converting a variety of different prompts into interactive, playable environments that can be easily created, stepped into, and explored
If these are generating fully interactive environments, why are all the clips ~1 second long?
Based on the first sentence in your paper, I would have expected a playable example as a demo. Or 20.
But reading a bit further into the paper, it sounds like the model needs to be actively running inference and will generate the next frame on the fly as actions are taken. Is that correct?
- jparkerholder 1 year ago
That is correct! The model generates each frame on the fly.
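For anyone trying to picture what "generates each frame on the fly" means in practice, here is a minimal, purely illustrative sketch of an action-conditioned rollout loop. All class and function names below are invented for the example; this is not Genie's actual interface or architecture.

```python
# Illustrative sketch only: a generic action-conditioned, frame-by-frame
# rollout loop of the kind discussed above. The model and its interface
# are stand-ins, not Genie's real API.

import numpy as np

class ToyWorldModel:
    """Stand-in for a learned dynamics model that predicts the next frame
    given the frame history and a discrete (latent) action."""
    def __init__(self, num_actions=8, frame_shape=(64, 64, 3)):
        self.num_actions = num_actions
        self.frame_shape = frame_shape

    def next_frame(self, history, action):
        # A real model would run a transformer over tokenized frames + action.
        # Here we just perturb the last frame so the loop is runnable.
        rng = np.random.default_rng(action)
        return np.clip(history[-1] + rng.normal(0, 0.01, self.frame_shape), 0.0, 1.0)

def play(model, first_frame, actions):
    """Roll the model forward one generated frame per player action."""
    frames = [first_frame]
    for a in actions:
        frames.append(model.next_frame(frames, a))
    return frames

model = ToyWorldModel()
prompt = np.zeros(model.frame_shape)  # e.g. an image the player "steps into"
trajectory = play(model, prompt, actions=[0, 3, 3, 1, 7])
print(len(trajectory), "frames generated")
```

The point is just the shape of the loop: one inference call per player action, each conditioned on everything generated so far, rather than a pre-rendered video.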
- polygamous_bat 1 year ago
Firstly, do these models learn a good physics grounding for nonsense actions? Like if you keep pressing down even when you are already on the ground? Or will they phase you through the ground?
Secondly, why are all the videos like half a second long? I thought video generation had come much farther than this. My guess would be that the world models unravel at any length longer than that, which is (and has always been) the problem with models such as these. Minus the video generation part, we had pretty good world models for games already, see the Dreamer line of work: https://danijar.com/project/dreamerv3/
- jparkerholder 1 year ago
Author here :) Re: 1) typically no, but of course it can hallucinate just like LLMs. 2) Agreed, but the key point missing is that Dreamer is trained from an RL environment with action labels. Genie is trained exclusively from videos and learns an action space. This is the first version of something that is now possible and will only improve with scale.
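To make "learns an action space from videos" a bit more concrete, here is a rough sketch of the general idea behind a latent action model: infer a small discrete latent action between consecutive video frames through a VQ-style bottleneck, so no action labels are ever needed. All module names, sizes, and the toy training step below are assumptions for illustration, not the paper's implementation.

```python
# Rough sketch of the "learn an action space from unlabeled video" idea.
# Everything here (shapes, layers, loss) is made up for illustration.

import torch
import torch.nn as nn

class LatentActionModel(nn.Module):
    def __init__(self, frame_dim=64 * 64 * 3, hidden=256, num_actions=8):
        super().__init__()
        # Encoder looks at (frame_t, frame_t+1) and proposes an action embedding.
        self.encoder = nn.Sequential(nn.Linear(2 * frame_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
        # A small codebook quantizes that embedding into one of num_actions codes.
        self.codebook = nn.Parameter(torch.randn(num_actions, hidden))
        # Decoder must reconstruct frame_t+1 from frame_t plus the quantized action.
        self.decoder = nn.Sequential(nn.Linear(frame_dim + hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, frame_dim))

    def forward(self, frame_t, frame_tp1):
        z = self.encoder(torch.cat([frame_t, frame_tp1], dim=-1))
        # Nearest codebook entry = the discrete "latent action" for this transition.
        idx = torch.cdist(z, self.codebook).argmin(dim=-1)
        z_q = self.codebook[idx]
        # Straight-through estimator so gradients still reach the encoder.
        z_q = z + (z_q - z).detach()
        recon = self.decoder(torch.cat([frame_t, z_q], dim=-1))
        return recon, idx

# Training signal is just next-frame reconstruction from unlabeled frame pairs.
model = LatentActionModel()
f_t, f_tp1 = torch.rand(4, 64 * 64 * 3), torch.rand(4, 64 * 64 * 3)
recon, actions = model(f_t, f_tp1)
loss = ((recon - f_tp1) ** 2).mean()
loss.backward()
print(actions)  # discrete latent actions inferred without any labels
```

Because the bottleneck is small and discrete, the inferred codes end up behaving like controller inputs, which is what lets a player drive the model at inference time.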
- polygamous_bat 1 year ago
Thanks for braving the crowd here; you will unfortunately only find hard questions.
Anyway, about my second question: why are the videos only half a second or so long? Does the model unravel after that?
Also
> This is the first version of something that is now possible and will only improve with scale.
11B params is already pretty large compared to Stable Diffusion and typical LLM scales. How much higher do we need to scale before we get something useful beyond simple setups?
- jparkerholder 1 year ago
The bigger issue is the lack of generating novel content rather than a total "unravel". We focus on OOD images because our motivation is generating diverse environments, but these are much harder to play for longer vs. images closer to the training videos. It is interesting because one of the things you gain when going from 1B -> 10B is the OOD images working at all. Note it is not even trivial to detect the character, given our model does not train with any labels or have any inductive biases to do so.
Point of clarification -- we don't expect bigger models to be the only way to improve this and are working on innovations on the modeling side; however, we don't want to overlook the significance of scaling either :)
- nycdatasci 1 year ago
The results seem quite bad. Compare the static image and "game" in this one example:
Static Image: https://lh3.googleusercontent.com/c0GV4hG0Xg0eqpsUS1z62v6aJ2...
"Game": https://lh5.googleusercontent.com/L_WsAa1saPmj29DSKda_fzk15y...
In the video, the character becomes a pixelated mess. In the static image, the character is clearly on rocks in the foreground, but in the "game" we see the character magically jumping from the foreground rocks to the background structure which also contains significant distortions.
The extremely short demo videos make it slightly harder to catch these obvious issues.
- polygamous_bat 1 year ago
What is the video resolution, 64x64? And even then it becomes blurry. Seems like another Google flag-plant-y paper filled with hot air that we will never see the source code or model for, because it will expose how poor its capabilities are relative to competitors.
The internal politics at these places must be exhausting. Industry research was supposed to be free from the publish or perish mindset, but it seems like it just got replaced by a different kind of need for posturing.
- jparkerholder 1 year ago
Hey, author here :) First, tough crowd, love it -- always great to get feedback because we are actively working on improving the model. We are very happy to admit it is not perfect, but given that not many people thought this was possible a year ago, I am quite excited to see the next step of improvement. This is like the GPT-1 of foundation world models, and we have a fair few ideas in the works to speed up progress.
The resolution is 90p, but we use an upsampler to make it 360p for the examples on the website.
- nullptr_deref 1 year ago
How can I get started with this kind of research? Is it even possible without a PhD? Thanks.
- sqreept 1 year ago
I've read the announcement twice and I can't tell what this is good for. Can you please dumb it down for me?
- snide 1 year ago
I'm old and immediately assumed this would link to a historical retrospective of GEnie.
- mdrzn 1 year ago
Seems very interesting, but as soon as I see "Google Research" or "Deepmind" now it's an instant turn-off. Too much PR, not enough substance. Not targeting you guys directly with this, but the company you work for.
- joloooo 1 year ago
Looking forward to following your progress. I've been wanting to see how we might replace polygons for gaming long term, and this seems like a step in the right direction.