Unity is a game engine and toolset that’s used for many games and graphical apps on a wide range of platforms. It provides a platform-agnostic way of building such a solution and publishing it to a platform of your choice – be it phone, tablet, the web, PC, Xbox and more.

Augmented Reality, Virtual Reality and Mixed Reality are new display and interaction techniques where 3D and animation are important. Unity is a perfect platform for these as well. This also applies to Mixed Reality devices like the HoloLens and the VR headsets for the Windows Mixed Reality platform.

In this post I’ll explain the basic concepts Unity uses. Of course there are a lot of training resources and blog posts; this is my way of organizing things in a way I understand … and hopefully it’s useful for you too.


Projects

The start of everything is a project. That’s the container holding all of the files that make up the application or game. It has a folder-and-file structure.

Although Unity is aimed at publishing the end result to a huge collection of platforms, during development you aim at one platform at a time. This can be set in the Build Settings. But of course you can change platforms during development to test or focus on another platform.


Scenes

Every Unity project is made up of one or more scenes. A scene contains all of the content – game objects, lighting, cameras – for one part of the application. You can use multiple scenes for complex scenarios, or to let large development teams work together more easily.

Game objects

To fill a scene, game objects are used. A game object is a generic type that can also act as a container. Every game object has at least a Transform, which contains parameters that determine its position, rotation and scale. You can add design, interaction and/or behavior to a game object by adding one or more components.
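As a minimal sketch of how this looks in code, here is a hypothetical script that manipulates the Transform that every game object has (names like Mover are just for illustration):

```csharp
using UnityEngine;

// A minimal sketch: a script component that moves and rotates
// its game object through the Transform.
public class Mover : MonoBehaviour
{
    void Update()
    {
        // Move one unit per second along the X axis...
        transform.position += Vector3.right * Time.deltaTime;
        // ...and spin 90 degrees per second around the Y axis.
        transform.Rotate(0f, 90f * Time.deltaTime, 0f);
    }
}
```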

Unity offers prefabs for common types of objects. There are 3D objects, like Cube and Sphere. There are 2D UI objects, like GUI Text or Button. But there are also more special objects like Camera, Lighting, Audio Source or Video.

Game objects can have a Tag. That’s just a label, but it can be used to identify a game object. It can be unique to one game object, but you can also reuse it over various game objects. There is a small standard list of tags provided, but you can add custom ones.
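In a script, tags are typically used to look objects up or to check what kind of object you are dealing with. A sketch, assuming tags named "Player" and "Enemy" exist in the project:

```csharp
using UnityEngine;

public class TagLookup : MonoBehaviour
{
    void Start()
    {
        // Find a single game object by its (unique) tag...
        GameObject player = GameObject.FindWithTag("Player");

        // ...or all game objects sharing the same tag.
        GameObject[] enemies = GameObject.FindGameObjectsWithTag("Enemy");
    }
}
```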


Components

There is a huge collection of components that can be used in game objects: Physics types, Rendering types, Audio types, Video types and more.

The standard collection of components is huge, but a few are worth highlighting: Script, Rigidbody and Colliders.

Script component

The script component is the way to connect code to an object (a code-behind construct). Code can be written in JavaScript or C#. You can define which editor to use, as there is no code editor built into Unity. If you have Visual Studio 2017 installed, it teams up perfectly with the Unity editor for editing scripts.

If you use C#, the script is in the form of a class. The context of the script is the Game Object it’s attached to. By using the GetComponent<T>() method, where T is the type of the component you want to retrieve, you can access the other components of the object.
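A short sketch of that pattern – a class attached to a game object, using GetComponent<T>() to reach a sibling component on the same object:

```csharp
using UnityEngine;

public class ComponentAccess : MonoBehaviour
{
    void Start()
    {
        // Retrieve another component attached to the same game object.
        Rigidbody body = GetComponent<Rigidbody>();
        if (body != null)
        {
            body.mass = 2f; // only works if a Rigidbody is attached
        }
    }
}
```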

Public properties are visible in the Unity Editor as properties of the script that can be set. Public functions can be called by other Game Objects if they have a reference to the object.

Of course you want game objects to interact with each other. The way to do that is to add a script component to a game object and add a public property to that script to hold a reference to another game object. This can be set in the Unity Editor.
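A sketch of that construct, assuming a hypothetical Follower script whose target field is filled in by dragging another game object onto it in the Unity Editor:

```csharp
using UnityEngine;

public class Follower : MonoBehaviour
{
    // Public fields show up in the Inspector; drag another
    // game object from the hierarchy onto 'target' to set it.
    public GameObject target;
    public float speed = 2f;

    void Update()
    {
        if (target != null)
        {
            // Move towards the referenced game object every frame.
            transform.position = Vector3.MoveTowards(
                transform.position,
                target.transform.position,
                speed * Time.deltaTime);
        }
    }
}
```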

Rigidbody component

If you want to apply physics behavior to a game object, it must have a Rigidbody component. The properties of this component define how it reacts to gravity, what the mass or the resistance (or drag) is and more.
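A sketch of setting those properties and applying a force from a script (assuming the game object has a Rigidbody component attached):

```csharp
using UnityEngine;

public class Jumper : MonoBehaviour
{
    void Start()
    {
        Rigidbody body = GetComponent<Rigidbody>();
        body.mass = 1.5f;   // how heavy the object is
        body.drag = 0.5f;   // air resistance

        // Apply an instant upward force; gravity pulls it back down.
        body.AddForce(Vector3.up * 5f, ForceMode.Impulse);
    }
}
```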

Collider components

A collider is a definition of the 3D shape that is used in collision detection between objects. It can be exactly the shape of the game object, but it can also be a simplified shape, which is common practice for performance reasons.

There are standard colliders like Box, Sphere, Wheel and Capsule colliders. But there is also a Mesh Collider, where the shape is defined by the mesh it contains (or all the polygons it’s made up of).
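When colliders touch, the physics engine notifies your scripts through event methods. A minimal sketch of reacting to a collision (at least one of the two objects needs a Rigidbody for these events to fire):

```csharp
using UnityEngine;

public class CollisionLogger : MonoBehaviour
{
    // Called when this object's collider hits another collider.
    void OnCollisionEnter(Collision collision)
    {
        Debug.Log("Hit " + collision.gameObject.name);
    }

    // Colliders marked 'Is Trigger' report overlaps instead of
    // physically blocking, via this method.
    void OnTriggerEnter(Collider other)
    {
        Debug.Log("Entered trigger of " + other.name);
    }
}
```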


Prefabs

Unity offers a great collection of prefabs, but you can create them yourself as well. Just drag a game object, even with its child game objects, from the scene’s hierarchy to the project hierarchy. This creates a prefab that includes the game object, its children and all of the components attached to them, including the scripts. This way you can create logic to be reused over and over again in the application or game.
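Prefabs can also be spawned from code. A sketch, assuming a hypothetical Spawner script whose prefab field is filled by dragging a prefab from the project hierarchy onto it:

```csharp
using UnityEngine;

public class Spawner : MonoBehaviour
{
    // Assigned in the Inspector by dragging a prefab onto it.
    public GameObject prefab;

    void Start()
    {
        // Create a fresh copy of the prefab, with all its children
        // and components, at a given position and rotation.
        Instantiate(prefab, new Vector3(0f, 1f, 0f), Quaternion.identity);
    }
}
```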

A standard prefab worth mentioning is the Camera.

The Camera

The camera is, as the name reveals, the way to look at all the objects you place in the environment. There are different types of cameras that determine how the view is rendered (e.g. stereoscopic for VR or MR, or flat for a game on a phone, etcetera).

The camera has all kinds of properties that influence the rendering. You can set the position of the camera in 3D space and the zoom factor, but also the standard background that is displayed behind everything in the scene.

It’s also possible to have multiple cameras. As you can specify where each camera’s output is shown, you can have multiple (virtual) monitors displaying different viewpoints.
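A sketch of adjusting those camera properties from a script attached to a camera object; the viewport rect is what decides where a camera’s output lands on screen when you use more than one:

```csharp
using UnityEngine;

public class CameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.fieldOfView = 60f;             // the 'zoom' factor
        cam.backgroundColor = Color.black; // shown behind everything

        // With multiple cameras, the viewport rect determines where
        // this camera's output appears (here: the left half of the screen).
        cam.rect = new Rect(0f, 0f, 0.5f, 1f);
    }
}
```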

Learning Unity

What helped me get started with Unity are the (obvious) tutorials on https://unity3d.com/learn/tutorials. The Roll a Ball and Space Shooter tutorials are well constructed and help you get through the interface of the Unity editor. They even include ‘mistakes’ you can make while working on a project, and how to solve them.