Unity 2018 Artificial Intelligence Cookbook






















What is this book about? Over 90 recipes to build and customize AI entities for your games with Unity. All of the accompanying code is organized into folders.

The code bundle is released under the MIT License. We need a couple of basic behaviors, called Seek and Flee; place them right after the Agent class in the scripts' execution order.
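A minimal sketch of how these pieces might fit together, assuming the usual steering-behavior layout the chapter describes (a Steering data class, an Agent that integrates it, and AgentBehaviour as the base). Member names such as maxAccel and maxSpeed are illustrative assumptions, not the book's exact code:

```csharp
using UnityEngine;

// Holds one frame's steering output: a linear acceleration and an angular value.
public class Steering
{
    public Vector3 linear;
    public float angular;
}

// The Agent integrates the steering it receives each frame.
public class Agent : MonoBehaviour
{
    public float maxSpeed = 5f;
    public float maxAccel = 10f;
    [HideInInspector] public Vector3 velocity;
    protected Steering steering = new Steering();

    public void SetSteering(Steering s) { steering = s; }

    public virtual void Update()
    {
        transform.position += velocity * Time.deltaTime;
        velocity += steering.linear * Time.deltaTime;
        velocity = Vector3.ClampMagnitude(velocity, maxSpeed);
    }
}

// Base class: runs after Agent in the script execution order.
public class AgentBehaviour : MonoBehaviour
{
    public GameObject target;
    protected Agent agent;

    public virtual void Awake() { agent = GetComponent<Agent>(); }
    public virtual void Update() { agent.SetSteering(GetSteering()); }
    public virtual Steering GetSteering() { return new Steering(); }
}

// Seek accelerates straight toward the target; Flee is the mirror image.
public class Seek : AgentBehaviour
{
    public override Steering GetSteering()
    {
        Steering steering = new Steering();
        steering.linear = target.transform.position - transform.position;
        steering.linear.Normalize();
        steering.linear *= agent.maxAccel;
        return steering;
    }
}

public class Flee : AgentBehaviour
{
    public override Steering GetSteering()
    {
        Steering steering = new Steering();
        steering.linear = transform.position - target.transform.position; // away
        steering.linear.Normalize();
        steering.linear *= agent.maxAccel;
        return steering;
    }
}
```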

Pursue and Evade are essentially the same algorithm; they differ only in the base class they derive from. These behaviors build on Seek and Flee, taking the target's velocity into account in order to predict where it will go next, and they aim at that predicted position using an internal extra object.
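A sketch of the prediction step, assuming the Seek class and Agent layout described above; maxPrediction and targetAux are assumed names for illustration:

```csharp
using UnityEngine;

// Pursue derives from Seek and chases an internal target placed at the
// predicted future position of the real target. Evade would derive from
// Flee with the same prediction logic.
public class Pursue : Seek
{
    public float maxPrediction = 1.0f;  // cap on look-ahead time, in seconds
    private GameObject targetAux;       // the real target
    private Agent targetAgent;          // its Agent component, for velocity

    public override void Awake()
    {
        base.Awake();
        targetAgent = target.GetComponent<Agent>();
        targetAux = target;
        target = new GameObject();      // internal extra object Seek will chase
    }

    void OnDestroy() { Destroy(target); }

    public override Steering GetSteering()
    {
        // Estimate how long it would take to reach the target at current speed.
        Vector3 direction = targetAux.transform.position - transform.position;
        float distance = direction.magnitude;
        float speed = agent.velocity.magnitude;
        float prediction = (speed <= distance / maxPrediction)
            ? maxPrediction
            : distance / speed;
        // Aim at where the target will be, then let Seek do the work.
        target.transform.position = targetAux.transform.position
            + targetAgent.velocity * prediction;
        return base.GetSteering();
    }
}
```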

We learned how to implement simple behaviors for our agents. However, our games will probably also need the help of Unity's physics engine. In that case, we need to check whether our agents have a Rigidbody component attached to them, and adjust our implementation accordingly.

Our first step is to recall the execution order of event functions; now we must also consider FixedUpdate, because we're handling behaviors on top of the physics engine.

We added a member variable to store a reference to a possible Rigidbody component, and implemented FixedUpdate similarly to Update, except that we apply force to the rigid body instead of translating the object ourselves, because we're working on top of Unity's physics engine. Finally, we added a simple validation at the beginning of each function so that each one is called only when it applies.
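A sketch of those additions to the AgentBehaviour base class, assuming the layout used in the earlier recipes; the member name aBody is an assumption:

```csharp
using UnityEngine;

// Physics-aware version of the base behavior: when a Rigidbody is present,
// steering is applied as a force in FixedUpdate instead of in Update.
public class AgentBehaviour : MonoBehaviour
{
    public GameObject target;
    protected Agent agent;
    protected Rigidbody aBody;   // null when no Rigidbody is attached

    public virtual void Awake()
    {
        agent = GetComponent<Agent>();
        aBody = GetComponent<Rigidbody>();
    }

    public virtual void Update()
    {
        if (aBody != null) return;           // physics path runs in FixedUpdate
        agent.SetSteering(GetSteering());
    }

    public virtual void FixedUpdate()
    {
        if (aBody == null) return;           // no physics: Update already ran
        // Apply the steering as a force rather than translating the transform.
        aBody.AddForce(GetSteering().linear);
    }

    public virtual Steering GetSteering() { return new Steering(); }
}
```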

For further information on the execution order of event functions, please refer to the official Unity documentation. Similar to Seek and Flee, the idea behind these algorithms is to apply the same principles but extend them to the point where the agent stops automatically once a condition is met: either being close enough to its destination (Arrive) or far enough from a danger point (Leave). We need to create one file each for the Arrive and Leave algorithms, and remember to set their custom execution order.

They use the same approach, but the names of the member variables change, as do some computations in the first half of the GetSteering function. After calculating the direction to travel, the remaining calculations use two radii to decide when to go full throttle, when to slow down, and when to stop; that's why there are several if statements. In the Arrive behavior, when the agent is far away we aim for full throttle, progressively slow down once inside the slow-down radius, and finally stop when close enough to the target.
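The two-radius logic can be sketched as follows, assuming the Agent and AgentBehaviour layout from earlier; targetRadius, slowRadius, and timeToTarget are assumed names:

```csharp
using UnityEngine;

// Arrive: full throttle when far, ease off inside slowRadius, stop inside
// targetRadius. Leave inverts this logic around a danger point.
public class Arrive : AgentBehaviour
{
    public float targetRadius = 0.5f;   // stop once inside this radius
    public float slowRadius = 3.0f;     // start slowing down inside this radius
    public float timeToTarget = 0.1f;   // how quickly to match desired velocity

    public override Steering GetSteering()
    {
        Steering steering = new Steering();
        Vector3 direction = target.transform.position - transform.position;
        float distance = direction.magnitude;
        float targetSpeed;

        if (distance < targetRadius)
            return steering;                                  // close enough: stop
        if (distance > slowRadius)
            targetSpeed = agent.maxSpeed;                     // full throttle
        else
            targetSpeed = agent.maxSpeed * distance / slowRadius; // slow down

        Vector3 desiredVelocity = direction.normalized * targetSpeed;
        steering.linear = (desiredVelocity - agent.velocity) / timeToTarget;
        steering.linear = Vector3.ClampMagnitude(steering.linear, agent.maxAccel);
        return steering;
    }
}
```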

The inverse logic applies to Leave. Real-world aiming, as in combat simulators, works a little differently from the automatic aiming used in almost every game. Imagine that you need an agent to control a tank turret or a humanized sniper; that's when this recipe comes in handy.

The algorithm computes the internal target orientation from the vector between the agent and the real target, and then simply delegates the work to its parent class. This technique works like a charm for random crowd simulations, animals, and almost any kind of NPC that requires random movement when idle.
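A minimal sketch of the Face-style aiming step just described, which computes a target orientation from the direction to the target; here the rotation is emitted directly rather than delegated to a full Align parent class, which is a simplification of this sketch:

```csharp
using UnityEngine;

// Face: derive a target yaw (degrees around Y) from the vector to the target,
// then steer angularly toward it.
public class Face : AgentBehaviour
{
    public override Steering GetSteering()
    {
        Steering steering = new Steering();
        Vector3 direction = target.transform.position - transform.position;
        if (direction.magnitude > 0.0f)
        {
            // Yaw angle toward the target, measured around the Y axis.
            float targetOrientation = Mathf.Atan2(direction.x, direction.z)
                                      * Mathf.Rad2Deg;
            // Shortest signed angular difference from the current yaw.
            float rotation = Mathf.DeltaAngle(transform.eulerAngles.y,
                                              targetOrientation);
            steering.angular = rotation;  // a full Align would clamp and ease this
        }
        return steering;
    }
}
```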

We need to add another function to our AgentBehaviour class, called OriToVec, which converts an orientation value into a vector. We can regard this recipe as a big three-step process: first we manipulate the internal target position in a parameterized random way, then we face that position, and finally we move accordingly.
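The helper might look like this, assuming orientations are angles in degrees around the Y axis (an assumption of this sketch):

```csharp
using UnityEngine;

// Converts a yaw orientation (degrees around Y) into a unit direction vector
// on the XZ plane.
public static class OrientationUtil
{
    public static Vector3 OriToVec(float orientation)
    {
        Vector3 vector = Vector3.zero;
        vector.x = Mathf.Sin(orientation * Mathf.Deg2Rad);
        vector.z = Mathf.Cos(orientation * Mathf.Deg2Rad);
        return vector.normalized;
    }
}
```

In the book this lives as a member function on AgentBehaviour; a static helper is used here only to keep the sketch self-contained.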

The behavior takes two radii into consideration to pick the next random position, looks toward that random point, and converts the computed orientation into a direction vector in order to advance. There are times when we need scripted routes, and it's simply inconceivable to create them entirely in code.
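The Wander steps described above can be sketched as follows; offset, radius, and rate are assumed parameter names, and the orientation-to-vector helper is inlined so the sketch stands alone:

```csharp
using UnityEngine;

// Wander: 1) randomly perturb an internal orientation, 2) project a point on
// a circle ahead of the agent, 3) head toward that point at full acceleration.
public class Wander : AgentBehaviour
{
    public float offset = 3.0f;   // how far ahead of the agent the circle sits
    public float radius = 2.0f;   // radius of the random-target circle
    public float rate = 15.0f;    // max random orientation change per frame
    private float wanderOrientation;

    public override Steering GetSteering()
    {
        Steering steering = new Steering();
        // Step 1: perturb the internal wander orientation.
        wanderOrientation += Random.Range(-1f, 1f) * rate;
        float targetOrientation = wanderOrientation + transform.eulerAngles.y;
        // Step 2: place the target on a circle projected ahead of the agent.
        Vector3 targetPos = transform.position
            + OriToVec(transform.eulerAngles.y) * offset
            + OriToVec(targetOrientation) * radius;
        // Step 3: advance toward that point.
        steering.linear = (targetPos - transform.position).normalized
                          * agent.maxAccel;
        return steering;
    }

    // Yaw (degrees around Y) to a unit direction on the XZ plane.
    private Vector3 OriToVec(float orientation)
    {
        return new Vector3(Mathf.Sin(orientation * Mathf.Deg2Rad), 0f,
                           Mathf.Cos(orientation * Mathf.Deg2Rad));
    }
}
```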

Imagine you're working on a stealth game. Would you code a route for every single guard? This technique will help you build a flexible path system for those situations. It's a long recipe that can be regarded as a big two-step process: first, we build the Path class, which abstracts the points in the path from their specific spatial representation; then, we build the PathFollower behavior, which uses that abstraction to obtain actual spatial points to follow.

We use the Path class as a movement guideline. It is the cornerstone of the recipe: it relies on GetParam to map the agent to an offset point along its internal guideline, and on GetPosition to convert that referential point back into a position in three-dimensional space along the path's segments.
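A simplified sketch of the abstraction, assuming the path is a list of node Transforms linked in order; the projection here only checks the segment indicated by the last parameter, which is a simplification:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Path: GetParam maps a world position to an offset along the node chain;
// GetPosition maps such an offset back into world space.
public class Path : MonoBehaviour
{
    public List<Transform> nodes;   // ordered in the Inspector

    // Parameter = segment index + normalized projection along that segment.
    public float GetParam(Vector3 position, float lastParam)
    {
        int i = Mathf.Clamp((int)lastParam, 0, nodes.Count - 2);
        Vector3 a = nodes[i].position;
        Vector3 b = nodes[i + 1].position;
        Vector3 segment = b - a;
        float t = Vector3.Dot(position - a, segment) / segment.sqrMagnitude;
        return i + Mathf.Clamp01(t);
    }

    // Interpolate between the two nodes that bracket the parameter.
    public Vector3 GetPosition(float param)
    {
        int i = Mathf.Clamp((int)param, 0, nodes.Count - 2);
        return Vector3.Lerp(nodes[i].position, nodes[i + 1].position, param - i);
    }
}
```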

The path-following algorithm simply uses the path's functions to get a new position, update the target, and apply the Seek behavior. It's important to take into account the order in which the nodes are linked in the Inspector for the path to work as expected.
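Putting those functions together, the follower might look like this sketch, assuming the Seek class from earlier; pathOffset is an assumed name:

```csharp
using UnityEngine;

// PathFollower: advance the path parameter, fetch the next world-space point,
// and let Seek steer toward it.
public class PathFollower : Seek
{
    public Path path;
    public float pathOffset = 0.5f;  // how far ahead along the path to aim
    private float currentParam;

    public override void Awake()
    {
        base.Awake();
        target = new GameObject();   // internal target object for Seek
    }

    void OnDestroy() { Destroy(target); }

    public override Steering GetSteering()
    {
        currentParam = path.GetParam(transform.position, currentParam);
        float targetParam = currentParam + pathOffset;
        target.transform.position = path.GetPosition(targetParam);
        return base.GetSteering();
    }
}
```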

A practical way to achieve this is to name the nodes manually with a reference number. We can also define the OnDrawGizmos function to get a better visual reference of the path. In crowd-simulation games, it would be unnatural to see agents behaving entirely like particles in a physics-based system.

The goal of this recipe is to create an agent capable of mimicking our peer-evasion movement. We need to create a tag called Agent and assign it to the game objects we want to avoid, and we also need to attach the Agent script component to them.

Given a list of agents, we find the closest one and, if it is close enough, make our agent escape from that agent's expected route, computed from its current velocity, so that they don't collide.
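The idea can be sketched as follows, assuming the Agent and AgentBehaviour layout from earlier; collisionRadius is an assumed name, and the one-second velocity look-ahead is a simplification of this sketch:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Flee from the predicted position of the closest nearby agent tagged "Agent".
public class AvoidAgent : AgentBehaviour
{
    public float collisionRadius = 2.0f;
    private List<Agent> others = new List<Agent>();

    void Start()
    {
        foreach (GameObject go in GameObject.FindGameObjectsWithTag("Agent"))
        {
            Agent a = go.GetComponent<Agent>();
            if (a != null && go != gameObject)
                others.Add(a);
        }
    }

    public override Steering GetSteering()
    {
        Steering steering = new Steering();
        // Find the closest agent inside the collision radius.
        Agent closest = null;
        float minDist = collisionRadius;
        foreach (Agent other in others)
        {
            float d = Vector3.Distance(transform.position,
                                       other.transform.position);
            if (d < minDist) { minDist = d; closest = other; }
        }
        if (closest == null) return steering;
        // Escape from where the agent is heading, not just where it is now.
        Vector3 predicted = closest.transform.position + closest.velocity;
        steering.linear = (transform.position - predicted).normalized
                          * agent.maxAccel;
        return steering;
    }
}
```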

This behavior works well when combined with other behaviors using blending techniques (some are included in this chapter); otherwise, it's a starting point for your own collision-avoidance algorithms. In this recipe, we will implement a behavior that imitates our own ability to evade walls.

That is, seeing what lies in front of us that could be considered a wall or obstacle, and walking around it with a safety margin while trying to maintain our principal direction. This technique uses the RaycastHit structure and the Raycast function from the physics engine, so it's worth reviewing their documentation in case you're a little rusty on the subject. We cast a ray in front of the agent; when the ray hits a wall, the target object is placed at a new position that takes into account its distance from the wall and the declared safety distance, delegating the steering calculations to the Seek behavior. This creates the illusion of the agent avoiding the wall.
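A sketch of that single-ray version, assuming the Seek class from earlier; lookAhead and safetyMargin are assumed names:

```csharp
using UnityEngine;

// Cast a ray ahead; on a hit, place the internal target off the wall along
// the surface normal, and let Seek steer toward it.
public class WallAvoid : Seek
{
    public float lookAhead = 5.0f;     // ray length in front of the agent
    public float safetyMargin = 2.0f;  // distance to keep from the wall

    public override void Awake()
    {
        base.Awake();
        target = new GameObject();     // internal target object for Seek
    }

    void OnDestroy() { Destroy(target); }

    public override Steering GetSteering()
    {
        // Look along the current velocity, or forward when standing still.
        Vector3 dir = agent.velocity.sqrMagnitude > 0f
            ? agent.velocity.normalized
            : transform.forward;
        RaycastHit hit;
        if (Physics.Raycast(transform.position, dir, out hit, lookAhead))
        {
            // Aim at a point pushed away from the wall along its normal.
            target.transform.position = hit.point + hit.normal * safetyMargin;
            return base.GetSteering();
        }
        return new Steering();         // no wall ahead: no corrective steering
    }
}
```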

We could extend this behavior by adding more rays, like whiskers, to achieve better accuracy. It is also usually paired with other movement behaviors, such as Pursue, using blending. For further information on the RaycastHit structure and the Raycast function, please refer to the official documentation.

Blending techniques allow you to combine behaviors without creating new scripts every time you need a new type of hybrid agent. This is one of the most powerful techniques in this chapter, and probably the most widely used behavior-blending approach, given its power and low implementation cost. We must add a new member variable called weight to our AgentBehaviour class, preferably assigning it a default value.

In this case, the default is 1. We should refactor the Update function to pass weight as a parameter to the Agent class's SetSteering function. All in all, the new AgentBehaviour class should look something like this: the weights amplify each steering behavior's result, and the results are accumulated into the main steering structure.
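A sketch of the weighted-blending refactor; the SetSteering signature and member names are assumptions based on the text:

```csharp
using UnityEngine;

public class Steering
{
    public Vector3 linear;
    public float angular;
}

// Each behavior carries a weight and passes it along with its result.
public class AgentBehaviour : MonoBehaviour
{
    public float weight = 1.0f;          // relevance among the other behaviors
    protected Agent agent;

    public virtual void Awake() { agent = GetComponent<Agent>(); }

    public virtual void Update()
    {
        agent.SetSteering(GetSteering(), weight);
    }

    public virtual Steering GetSteering() { return new Steering(); }
}

// The Agent accumulates the weighted results into one steering structure,
// which it applies and resets each frame.
public class Agent : MonoBehaviour
{
    [HideInInspector] public Vector3 velocity;
    private Steering steering = new Steering();

    public void SetSteering(Steering s, float weight)
    {
        steering.linear += weight * s.linear;    // weighted sum of behaviors
        steering.angular += weight * s.angular;
    }
}
```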

The weights don't necessarily need to add up to 1. The weight parameter simply defines the relevance of each steering behavior relative to the others.


