  • 1 week ago
Adapting for the Unknown

Category

🤖
Technology
Transcript
00:01A.I. is a technological challenge and a scientific challenge.
00:06We don't yet know how to build intelligent systems.
00:10It's one of the great scientific questions of our time.
00:12What is the universe?
00:13What is life?
00:14How does the brain work?
00:16What is intelligence?
00:17Hello, my name is Yann LeCun.
00:20I'm the Chief AI Scientist at Meta.
00:22We tend to think that language is essential to intelligence.
00:28But in fact, that's not the case.
00:30Imagine a cube floating in the air in front of you,
00:31and imagine rotating that cube 90 degrees.
00:34You can picture that in your head,
00:36and it has nothing to do with language.
00:39Humans and animals navigate the world
00:40by building mental models of reality.
00:43What if we could develop that kind of common sense,
00:45an ability to make predictions about what is going to happen
00:47in an abstract representation space?
00:51We call this concept a world model.
00:54Getting machines to understand the physical world
00:56is very different from getting them to understand language.
01:00A world model is like a digital abstraction of reality
01:03that the AI can reference to understand the world
01:06and predict the consequences of its actions.
01:08So it will be able to plan a course of action
01:10to accomplish a given task.
01:12It doesn't need thousands of trials
01:14to learn something new,
01:15because the world model gives it
01:17a fundamental understanding of
01:18how the world works.
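The narration's idea of a world model can be sketched as a minimal interface: a model maps a state and an action to a predicted next state, and an agent picks actions by imagining their outcomes instead of trying them for real. This is a toy, hand-written stand-in (the names `world_model` and `plan` are illustrative, not Meta's API; a real model would predict in a learned abstract representation):

```python
# Toy world-model illustration: 1-D hand-written dynamics stand in for
# a learned model that predicts the consequences of actions.

def world_model(state: float, action: float) -> float:
    """Predict the next state after taking `action` in `state`."""
    return state + action

def plan(state: float, goal: float, actions=(-1.0, 0.0, 1.0)) -> float:
    """Pick the action whose *predicted* outcome lands closest to the
    goal, without executing anything in the real world."""
    return min(actions, key=lambda a: abs(world_model(state, a) - goal))

print(plan(state=0.0, goal=3.0))  # → 1.0 (the step toward the goal)
```

Because candidate actions are scored against imagined outcomes, no real-world trials are needed, which is the "no thousands of trials" point above.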
01:19The potential impact of AI that can
01:22make use of world models
01:24is vast.
01:25Imagine assistive technology
01:27that helps people
01:28with visual impairments.
01:30AI agents
01:32could guide people
01:34through complex tasks,
01:35enabling more personalized education.
01:38Imagine AI agents
01:40that can understand
01:41how a new line of code
01:43changes the state
01:44of the program
01:45and its effect on the external world,
01:47in the context of existing code.
01:50And of course,
01:51world models are essential
01:53for autonomous systems
01:53like self-driving cars
01:55and robots.
01:56In fact, we think
01:57that world models
01:57will usher in a new era
01:59for robotics,
02:00enabling
02:01real-world AI agents
02:02to help with
02:03chores and
02:04physical tasks
02:04without needing astronomical
02:06amounts of
02:06robotic training data.
02:09This is a very exciting time
02:11for AI research,
02:12with a
02:13captivating set
02:14of scientific questions
02:15ahead of us.
02:16We want to understand
02:17intelligence itself,
02:18as well as learning,
02:20reasoning
02:21and understanding
02:22the physical world,
02:23in order to build systems
02:24that can help people
02:26in their daily lives.
02:27We are happy
02:28to announce the release
02:29of V-JEPA 2,
02:31the next step
02:31in this journey.
02:33Stay with us
02:34as we continue
02:34to explore
02:35the possibilities
02:36of world models
02:37and push the boundaries
02:38of AI research.
03:07My name is Melissa Heikkilä.
03:09I'm delighted to be joined today
03:13by Yann LeCun,
03:15the Chief AI Scientist at Meta.
03:19Yann,
03:19every time I see you,
03:21you're in the middle
03:23of launching
03:24a new AI model.
03:29Tell us about V-JEPA 2.
03:35There's a V-JEPA 2 because there was a V-JEPA 1 before it.
03:40In fact, it's a model that tries to advance along at least three dimensions, three challenges of
03:50AI.
03:50The first being AI systems that understand the physical world.
03:54The second, systems that are capable of reasoning, and the third, systems capable of planning.
04:00So these are the three main obstacles or challenges that we have to overcome to get AI systems
04:10to the next level.
04:13So, V-JEPA 1 is one of the first systems that can really learn how the physical world behaves by being trained
04:22to predict videos, essentially.
04:25So, you show it a piece of video and corrupt the video in some way.
04:30You can mask part of the video, or just the future, the second half of the video, or
04:36just a whole chunk of it.
04:37And then you train the system to fill in the blanks, to basically predict what's missing in
04:43the video.
04:43We've been trying to do this for a long time.
04:46In fact, it's an example of the idea of...
04:51And that's really what brought about the success of large language models.
04:56And we've been trying to make this work on video for 10 years.
05:01In fact, even longer.
05:03But...
05:04And it's been a bit of a failure, until now.
05:07And the reason is
05:09that it's easy to predict the words that will follow in a text,
05:15which is why LLMs work so well.
05:17You can't do that with video.
05:18If you train a system by showing it a piece of video and asking what comes next in the
05:22video,
05:22it's not possible to predict all the details of what happens in that video.
05:29And so JEPA,
05:31which stands for Joint Embedding Predictive Architecture,
05:34and the V stands for video,
05:37doesn't try to reconstruct pixels.
05:40It tries to learn a representation, an abstract representation of the video,
05:45where a lot of the details are eliminated.
05:48So the system can make predictions in that representation space.
05:53That's the basic idea of JEPA.
05:55And we've been trying to find recipes
05:58to learn these things at large scale,
06:00and to apply that to several domains,
06:02not just video, but basically everything.
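The distinction drawn here, predicting in representation space rather than reconstructing pixels, can be sketched in a few lines of NumPy. This is a toy numerical illustration, not the actual V-JEPA architecture: the fixed averaging `encoder` and identity `predictor` are stand-ins for learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x):
    """Stand-in for a learned encoder: project a 'clip' (a flat vector)
    to a low-dimensional representation, discarding fine detail."""
    W = np.ones((4, 16)) / 16.0   # fixed projection, for illustration only
    return W @ x

def predictor(z_context):
    """Stand-in for a learned predictor operating on representations."""
    return z_context

clip = rng.normal(size=16)
context, target = clip.copy(), clip.copy()
context[8:] = 0.0                 # mask the second half of the clip

# JEPA-style objective: compare predicted vs. actual *embeddings*.
loss_latent = np.mean((predictor(encoder(context)) - encoder(target)) ** 2)

# Generative-style objective for contrast: reconstruct raw pixels.
loss_pixels = np.mean((context - target) ** 2)

print(loss_latent <= loss_pixels)  # → True
```

Because the encoder averages away unpredictable detail, the latent error is smaller than the pixel error here; that is the sense in which predicting in an abstract space sidesteps having to predict every detail.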
06:04And the new thing is that you're using robot data.
06:16And so, why is that significant?
06:18Well,
06:19this relates to the second challenge.
06:25Once the system has been trained just by watching video to basically predict what's missing
06:31in a video at the representation level, there's a second phase where you train what's called
06:36a world model.
06:37So a world model is something that tells you, given the state of the world at a particular
06:44time that you can observe from a short clip of video, and given an action that you imagine
06:51a robot taking, could the system predict what the state of the world is going to be after
06:57the action has been taken?
06:59And so if you have this kind of world model, the system that has this model of the world
07:07can imagine the consequence of a sequence of actions, and therefore can figure out a particular
07:13sequence of actions that will fulfill a goal, that will accomplish a task.
07:18That's planning.
07:19So that's one of the big challenges that AI systems should be able to do.
07:24And planning and reasoning are really the same thing.
07:26So this could give us a blueprint also for systems that are capable of reasoning.
07:31And what we've shown with VJPA V2 is that when we train such a world model, we can use it
07:36to basically plan the motions of a robot arm to grab an object or open a door or something
07:42of that type.
07:43Those are not particularly complicated tasks, but what's interesting about it is that the
07:47system can basically learn or can accomplish those tasks without being explicitly trained
07:52to do those tasks.
07:54It's just using its world model to predict what the consequences of its actions are going
07:57to be.
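The planning loop described above, imagine the consequences of candidate action sequences with the world model and keep the sequence that reaches the goal, can be sketched as a brute-force search over a toy grid world. This is illustrative only; V-JEPA 2 plans in a learned representation space, not over a hand-written table of moves.

```python
from itertools import product

# Toy deterministic world model on a 2-D grid. A real world model
# would be learned from video and robot data, not written by hand.
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def world_model(state, action):
    """Predict the next state after taking `action` in `state`."""
    dx, dy = MOVES[action]
    return (state[0] + dx, state[1] + dy)

def plan(start, goal, horizon=4):
    """Search action sequences by *imagining* rollouts with the model,
    returning the first sequence whose predicted end state is the goal."""
    for seq in product(MOVES, repeat=horizon):
        state = start
        for a in seq:
            state = world_model(state, a)
        if state == goal:
            return list(seq)
    return None

print(plan((0, 0), (2, 2)))  # → ['up', 'up', 'right', 'right']
```

No action is ever executed during the search; the robot only acts once a sequence that fulfills the goal has been found, which is exactly the "imagine, then act" loop described above.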
07:58And what are the real world use cases for this?
08:01Like, why do we need to build this and how can we use it?
08:03So everybody is talking about AI systems that can reason, AI systems that can plan, agentic
08:10systems that can take actions.
08:12To be able to take an action or a sequence of actions reliably, you have to imagine, be
08:19able to imagine what the result of those actions is going to be.
08:24If you take a sequence of actions blindly and you don't know what the effect is going to
08:28be, you're going to make some mistakes.
08:30And so current systems, you know, LLMs don't have those internal world models.
08:35They're basically planning blindly by just kind of pretty much regurgitating plans they've
08:39been trained to do or produce.
08:42If you have this concept of world model, and a lot of people in AI are working on this
08:47at the moment, then the system can just project, imagine, you know, what sequence of actions,
08:55what is going to be the effect of the sequence of actions, and then plan to accomplish a particular
08:59goal.
08:59Now, there's still a lot of challenges there on the way.
09:02It's not like we've solved the problem.
09:04And this approach also connects with some classical work in robotics, in motion planning.
09:09The difference here being that those world models are trained from data, and it's completely
09:14unsupervised.
09:15There is no label data or nothing like that.
09:19Now, for AI scientists, one of the biggest goals is understanding machine intelligence
09:24or creating it, you know, this concept called artificial general intelligence, AGI.
09:29How does this get us to that?
09:34Well, so I think to get to, so I don't like the phrase AGI, okay, because it's,
09:39it's used to designate systems that have sort of human level intelligence, if you want.
09:44And so that relies on the assumption that human intelligence is general.
09:47And I'm sorry to say human intelligence is not general at all.
09:51We're very specialized.
09:54Slightly more general than most animals, but, you know, some animals are smarter than us
09:58in certain areas, and certainly computers can solve certain tasks much better than we
10:02can.
10:02And so that means in some way we are specialized, right?
10:05So I prefer these two phrases that we use at Meta.
10:10One is ASI.
10:12It's a little scary.
10:12It means artificial super intelligence.
10:14It means, okay, by the time that we get to human level intelligence, we can get past.
10:18And in fact, we already have a number of tasks where machines are already superior to humans.
10:23So that's what we should be talking about.
10:26And super intelligence doesn't mean it's general.
10:29It just means it's, you know, slightly better than humans in most domains.
10:33And then there's a second phrase that we use, and it's AMI.
10:37We actually pronounce it the French way, ami, which means friend.
10:42And that's basically our blueprint or plan to reach ASI.
10:48So there's a long-term plan in the company to basically, you know, produce AI assistants
10:54that are as smart as humans or perhaps better, that will help us in our daily lives.
11:00It's a challenge that, you know, we set for ourselves 10 years ago, and now, but it was
11:04really sort of exploratory research at the time.
11:06And now it's become more of a, you know, kind of a product strategy, if you want, even
11:11though we don't know how to do it yet.
11:13We're kind of starting to see the end of the tunnel.
11:16So it becomes, it's becoming real.
11:20And so the AMI plan is a set of techniques that some of us have been working on to basically
11:29overcome those four challenges I was talking about.
11:32Systems that understand the physical world, have common sense, if you want, have persistence
11:36memory, can reason, can plan.
11:40And also a fifth one, which is systems that are controllable and safe.
11:44So basically you, you give them a task and they accomplish that task and that's it.
11:48They don't do anything else.
11:50Great.
11:50I'm glad you mentioned superintelligence because this week there's been a lot of reports
11:54about Meta looking to start up a new superintelligence AI unit led by Scale AI's Alexandr Wang.
12:02What's the plan there?
12:03Well, I can't comment on Alex Wang or whatever.
12:07This is being discussed, whatever.
12:09But what I can say is that the plan of FAIR, Fundamental AI Research, which is the sort of long-term
12:16research branch of Meta in AI, has always been to reach human intelligence and go beyond it.
12:24It's always been the plan when it was created 11 and a half years ago.
12:28So it's just that now we have kind of a clearer vision for how to accomplish this.
12:33And VJPA V2 is kind of a step in that direction, if you want.
12:37And so it's becoming so real now that, you know, it's kind of the ambition of the company
12:43to really kind of do this at a product level as well.
12:47So, you know, Mark Zuckerberg is seeing this as kind of an achievable goal.
12:51And that's, you know, that's why you hear this sentence being thrown around.
12:58But concretely, what would superintelligence look like?
13:03Well, okay, we already have a lot of AI systems that are better than us at many tasks, right?
13:09So you can buy a 30-year-old gadget that plays chess and will beat you at chess,
13:15which means humans are not very good at playing chess.
13:17It's, you know, similar for Go and poker and, you know, there's a number of games like this
13:21where computers are just better than humans.
13:24But it's also true of, I don't know, solving equations, computing integrals in mathematics
13:28or, you know, finding the best path in a map to go from one city to another, right?
13:33We're used to this.
13:34Like, the machines are better than us at this.
13:36In fact, that's really what we should work on.
13:40Machines that are better than us at certain tasks for which we're not particularly good, right?
13:44That's the best way to assist humans with computers.
13:47That's actually the history of computers, really, using computers for things that humans are not particularly good at.
13:54So it's just a continuation of this where, you know, a future in which, like, all of us will be walking around
14:02with smart glasses or other smart devices within which there will be AI assistants that we can talk to,
14:10we can ask any question, maybe they display information in the glasses and things like that.
14:15And that will sort of amplify your intelligence, if you want.
14:18It's like having a staff of really smart virtual people kind of working with you at all times.
14:23You know, all of us will be like, you know, a CEO or a minister or a university professor, you
14:32know,
14:32with a staff of students who are smarter than themselves.
14:35So, you know, it's empowering.
14:37A lot of people kind of feel threatened by it.
14:39I think it's really empowering to work with people or machines that are smarter than you.
14:44Can you talk a bit about the work you're doing at FAIR?
14:46Because the team has gone through quite a lot of changes recently, right?
14:51Joelle Pineau, the VP of AI Research, left this spring, and there's been a lot of reorgs.
14:55So, how is your work going to change and what do you need to do to get to that vision?
15:01Well, so, it's not changing particularly.
15:03Yeah, Joelle actually left just a few weeks ago.
15:07And the new person leading FAIR is Rob Fergus, who actually was one of my first hires at FAIR back
15:1611 years ago.
15:18And then he spent five years at DeepMind and came back.
15:21So, now he's leading FAIR.
15:24And it's basically an opportunity that we're seeing to sort of reboot FAIR towards the ambitious goal of AMI or
15:33ASI.
15:35and sort of, you know, plot a path for the more sort of product-oriented part of META to kind
15:47of follow, perhaps,
15:49to make progress towards human-level AI, essentially.
15:54It's not that much of a reorganization, really.
15:57Sorry?
15:58It's not much of a reorganization other than a change of leadership.
16:02But other than that, our mission stays the same.
16:05And so, what would that concretely look like for a company like META?
16:08Is it products other than chatbots or what?
16:14Well, I mean, there's chatbots, right?
16:15We're used to them now.
16:17We want systems of this type that are agentic, so they can sort of, you know, act in the digital
16:23world, at least.
16:25We want, perhaps, systems that have a mental model of the user they're talking to, so they're not telling us
16:31things that we already know or that we're not able to absorb, right?
16:35So, those systems will have to have some mental model of what we know, what we don't know, you know,
16:41what we are capable of digesting, what kind of information we could be interested in.
16:47And systems that can, you know, plan sequences of actions that might be complicated, you know, making reservations for, you
16:53know, travel and stuff like that.
16:55And then, eventually, also kind of similar things for the physical world, domestic robots, robots of various types.
17:05And META is actually getting involved in robotics in sort of a big way.
17:09And then, in between, there are things like code writing, okay?
17:14So, we have systems, I mean, this is probably one of the biggest applications of LLMs at the moment, is
17:19helping software developers write code.
17:23And there is one issue with this, which is that very often when the code is produced, it's globally maybe
17:29correct, but not completely correct.
17:32You have to kind of fix some bugs and kind of change a few things.
17:35And those are generally relatively simple pieces of code, right?
17:38You can't ask a system to design your new algorithms or figure out a data structure for a particular problem
17:44that they've never seen before.
17:45They're basically kind of regurgitating code they've seen, you know, over the corpus that they've been trained on.
17:52What we want is, and that's what we're working on, a system that can imagine what the result of executing
18:00an instruction or making a function call will cause to the state of the program.
18:05And then plan a sequence of instructions to arrive at a particular goal.
18:09So, it's a very similar problem that I was talking about earlier.
18:12The ability to plan, to imagine what the consequences of your actions are.
18:17That's required for dialogue systems, for agentic systems, for code generation, and certainly for physical robots.
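The code-generation case he describes, a model of what each instruction does to program state, used to plan a sequence of instructions toward a goal, can be sketched with a hypothetical two-instruction mini-language (purely illustrative; the instruction set and names are made up for this example):

```python
from itertools import product

# Effect of each instruction on a single-register program state.
INSTRUCTIONS = {
    "inc": lambda x: x + 1,   # increment the register
    "dbl": lambda x: x * 2,   # double the register
}

def predict(state, instr):
    """'World model' for code: predict the state after executing instr."""
    return INSTRUCTIONS[instr](state)

def synthesize(start, target, max_len=5):
    """Plan a shortest instruction sequence whose *predicted* final
    state equals the target, by imagining executions rather than
    running anything."""
    for n in range(max_len + 1):
        for seq in product(INSTRUCTIONS, repeat=n):
            state = start
            for instr in seq:
                state = predict(state, instr)
            if state == target:
                return list(seq)
    return None

print(synthesize(1, 6))  # → ['inc', 'inc', 'dbl']
```

The same imagine-then-plan structure serves the robot case and the code case; only the state space (world state vs. program state) changes.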
18:26Yeah, if only humans could do that, too.
18:29What's your vision for FAIR, right?
18:32Like, the AI field is so competitive, and it's a constant talent war.
18:37How are you making Meta's AI research stand out and positioning yourself in the market?
18:43Well, I think the plan that we are following for AMI, for Advanced Machine Intelligence, using those JEPA architectures and
18:51architectures that are, you know, capable of planning and reasoning and all that stuff,
18:55I think it's pretty original.
18:56I wrote, sort of, a vision paper that I published three years ago.
19:01I put online, saying, like, here is where I think AI research would go over the next 10 years.
19:07This was before ChatGPT, right?
19:09And, you know, it hasn't changed significantly in the details, yes.
19:13And since, you know, the last five, six years, we've been, sort of, making steady progress along this plan, basically.
19:21VJEPA is just the latest incarnation of this.
19:23And so we're clearly making progress on those systems.
19:27Now, here is what's interesting.
19:28Those architectures, those JEPA architectures, are not generative architectures.
19:34Because they don't try to reconstruct or predict the world that they are trained on, the data that they are
19:39directly trained on.
19:40They are trying to learn an abstract representation, and they make prediction in that abstract representation space.
19:46So those architectures are not generative.
19:50Now, this is our long-term plan.
19:52This is what we are betting, you know, quite a bit on at the research level.
19:57That doesn't mean that the current paradigm of LLM is not useful.
20:03There is obviously a lot of applications of LLMs that need to be developed, that are being developed.
20:10Foundation models that, you know, need to be open source.
20:13We're very big on open source at Meta, as you know.
20:16So it's not because you believe that reaching human-level intelligence requires a new paradigm like JEPA, that you necessarily
20:25believe that LLMs are not useful.
20:26They are useful.
20:27We should be working on them.
20:28We are working on them.
20:29But it's a shorter term.
20:31And FAIR has always been kind of three, five, or ten years ahead of, you know, the current fashion in
20:40AI.
20:40So that's still what we're working on.
20:43We have our eyes on the horizon.
20:45And what sort of timeline are you thinking at, looking at for superintelligence?
20:50Yeah, that's a trick question, right?
20:52I mean, you know, generations of AI researchers have been consistently wrong about this kind of estimate.
20:59So, okay, my story would be if all the planets lined up, if all the techniques that we are imagining
21:05developing work out,
21:07and we don't run into the usual kinds of difficulties in developing them and scaling them,
21:11we're going to have a good handle on whether this kind of JEPA approach is really going to take us
21:18there within three years.
21:20And then within five years, probably some, you know, early results or prototypes that may have systems that really understand
21:27the physical world,
21:28perhaps at the level of a cat or a rat or something like this, right?
21:31And then we're going to work our way up towards systems that can also, you know, plan not just in
21:38the sort of low-level physical world,
21:40but also in sort of abstract domains, perhaps connected with language or connected with mathematics or with geometry or things
21:48like that, right?
21:49And maybe reach human intelligence at some point, but it's almost certainly harder than we think.
21:56That doesn't mean we're not going to have useful, you know, artifacts on the way to there.
22:02So, I think we're going to start seeing kind of the more tangible applications of this maybe within five years
22:07or so.
22:08Okay, and what do the next six months look like for Meta AI?
22:12The next six months.
22:14The next six months.
22:16You know, for us, like, you know, we have like a plan of like so many problems to solve that
22:21we think we can solve,
22:23but we really need to work on them, right?
22:26So, the VJEPA V2 is one of the JEPA architectures.
22:31It's trained in a particular way on videos that we can train them on.
22:35I think the amount of training we've done is like a million videos or something.
22:39It's a lot.
22:40So, we can scale those models.
22:42We can scale the training.
22:45That's at least what we can demonstrate.
22:47We can show that we can use those models to plan simple actions for robots.
22:51Now, we have to apply this to kind of more situations.
22:54But also, we don't think we have the perfect recipe yet.
22:58Like, we think there is probably a better way of training those JEPA architectures that we haven't figured out yet.
23:03So, there's some work going on this.
23:05There are technical problems linked with planning.
23:08So, when you plan a sequence of actions, and it's true for humans as well,
23:12we can't plan a very long sequence of actions.
23:15If it's a complicated task that requires many actions, we need to plan hierarchically.
23:20And this is a completely unsolved problem.
23:23So, let's say I'm sitting in my office at NYU, and I want to be in Paris.
23:28I want to fly to Paris, right?
23:30So, I cannot plan my entire trip to Paris in terms of millisecond by millisecond muscle control,
23:37which is really the low-level actions I can take.
23:40I have to plan this trip at a very abstract high level.
23:44And at high level, I can say, well, I need to go to the airport and catch a plane.
23:50And I don't need to know the details.
23:51I don't need to know how bad the traffic is or how I'm going to get to the airport.
23:55I just need to go to the airport.
23:56So, now I have a sub-goal, going to the airport.
23:58I can plan a sequence of actions to get to the airport.
24:02I'm in New York, so I walk down to the street and hail a taxi.
24:05How do I walk down to the street?
24:07That's another sub-goal, where I need to get to the elevator, push the button, go down, and walk out
24:12the building.
24:13How do I go to the elevator?
24:15I need to stand up from my chair, pick up my bag, open the door, walk to the elevator, avoid
24:20the obstacles, and things like this.
24:22And it gets to a level where I have all the information I need to be able to take the
24:26action.
24:27So, this idea that we're doing everything we do, we plan hierarchically, is very powerful.
24:32We don't know how to do this with AI systems yet.
24:35We have some ideas, but it's still very much at the research level.
24:39And that's a big challenge to crack again for the next few years.
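The airport example above is a classic hierarchical task decomposition. A minimal sketch, with a hand-written task hierarchy in the spirit of HTN planning: the hard, unsolved part he points to is getting a system to learn this decomposition rather than having it written down.

```python
# Hand-written task hierarchy: abstract tasks refine into sub-tasks;
# tasks not in the table are primitive, i.e. directly executable.
HIERARCHY = {
    "go to Paris": ["go to the airport", "catch a plane"],
    "go to the airport": ["walk down to the street", "hail a taxi"],
    "walk down to the street": ["stand up", "take the elevator",
                                "exit the building"],
}

def refine(task):
    """Recursively expand an abstract task into primitive actions."""
    if task not in HIERARCHY:          # primitive: actionable as-is
        return [task]
    plan = []
    for sub in HIERARCHY[task]:
        plan.extend(refine(sub))
    return plan

print(refine("go to Paris"))
# → ['stand up', 'take the elevator', 'exit the building',
#    'hail a taxi', 'catch a plane']
```

Note that each level plans only at its own granularity: "go to Paris" never mentions elevators, exactly as in the description above.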
24:42I know we're at time, but what is your intuition?
24:45How do we do that?
24:48Well, we hire smart people, and we give them the problem, try to convince them that this is an interesting
24:56problem to work on,
24:57because most scientists and researchers have their own idea about what is a good idea to work on,
25:02so you can't really tell what to work on.
25:05You can do this with graduate students, though, so that's a good way to start.
25:10And then you try to kind of socialize this idea with other people,
25:14so that they give you an idea about perhaps how to approach that problem.
25:18So, the problem of planning, for example.
25:21There's a lot of very deep sort of questions that have to do with applied mathematics in there,
25:28more than computer science, really.
25:30So, you have to go talk to applied mathematicians and tell them,
25:34like, we have this problem that we don't know how to solve.
25:36It's an optimization problem.
25:37You guys have been working on this for 20 years, not necessarily for planning,
25:40but, you know, can you help us solve this problem?
25:44And it takes a whole scientific community,
25:46which is why it is very important to practice open research,
25:50because you need contributions.
25:52This is a scientific problem.
25:53It's not a technological development problem.
25:55You need to basically kind of gather all the talents that may contribute to this
26:02from the scientific community in academia, in, you know, other companies,
26:07and public research, particularly in Europe.
26:09So, that's why we need to practice open research.
26:12And then, the outcome of this is that we open-source our code.
26:14So, V-JEPA 2 is open source.
26:17That's a fantastic note to end on.
26:20Thank you so much, Yann.
26:20That was fascinating.
26:21And thank you so much for joining us today.
26:23Thank you.