By Angela Larasati (2001591022)

On the 28th of February, several students from BNSD and Computer Science had the chance to join a game prototyping event held at the Binus International FX Campus. The games to be tested were made by Masbro, a game developer from Indonesia. According to them, game prototyping is very important, especially for board games, as it involves external factors like people and the way they play the games. Therefore, we had to list the good and bad things about each game and give our input after we finished each one.

We played two different games that day: Bluffing Billionaire and Santet. Bluffing Billionaire was the first game we played, and its mechanics were very easy to understand, so we could play around two to three rounds in the time we had. The idea of using public figures as the characters, along with their wealth and their ability to attack other players, was very unique and interesting. However, the game was played in a circle, so we could only bluff to the person next to us. It would be better if we could bluff against all the players and be attacked by any of them as well.

On the other hand, Santet was harder to understand, as there were more rules and mechanics to follow. The game was all about attacking each other based on our characters, and different characters meant different powers. Unfortunately, we found that one or two characters were overpowered: once they attacked, the game would end or restart instantly. Even so, we enjoyed playing it as much as the previous game.

As our campus rarely held this kind of event, the students' enthusiasm was very high, even though board games are not as popular as digital video games nowadays. It might take some time at the beginning to understand the rules and mechanics of a board game. However, board games do not require screens or monitors; instead, they require direct human interaction. Through board games, people gather and play together. Board games can turn strangers into friends, or even friends into opponents. So many emotions came out of each player: surprise when being attacked, despair at going bankrupt, and mostly joy that turned into laughter. It was a very enjoyable evening in the middle of our hectic school days.

Eutopia Project

Project Overview

Eutopia is a project that aims to create a digital environment that humans can perceive during their daily activities. Unlike virtual reality, which isolates human perception inside the virtual world, augmented reality (AR) lets humans continue their daily activities in the real world while enjoying the AR-enhanced environment. An ideal example of a successful product is depicted in the PlayStation 3 video game Heavy Rain: one of its characters, Norman Jayden, can work in his small old office while perceiving that he is working in a forest.


Illustration of Norman Jayden in his AR-enhanced environment


Technical Information

Currently, the project has covered only AR content creation. The AR content is made using the Vuforia SDK on Unity Daydream preview 5.4.2f2.

Vuforia is a marker-based SDK. The main idea behind creating the digital environment is to place markers that identify five sides of a room, namely: front, back, down, right, and left. These direction names do not necessarily indicate where the user is facing. Because in practice the user will not always keep the markers in focus, the Extended Tracking feature is used to create a stable experience. It is a feature of the Vuforia SDK in which the computer remembers where the markers are and keeps projecting the imagery even when the markers are out of focus.
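Conceptually, the room-marking scheme amounts to a small piece of bookkeeping: each marker is tied to one side of the room, and an extended-tracking flag decides whether its imagery keeps being projected when the marker leaves the camera's focus. The sketch below models that idea in plain Java; the class and method names are invented for illustration and are not Vuforia API calls, since the actual detection and projection are handled by the SDK inside Unity.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative bookkeeping for the five-side room-marking idea.
// None of these names come from the Vuforia API.
public class RoomMarkers {
    public enum Side { FRONT, BACK, DOWN, LEFT, RIGHT }

    private final Map<String, Side> markerToSide = new HashMap<>();
    private boolean extendedTracking = false;

    // Register a physical marker against one side of the room.
    public void register(String markerName, Side side) {
        markerToSide.put(markerName, side);
    }

    public void enableExtendedTracking() { extendedTracking = true; }

    // Imagery for a marker should still be drawn when the marker is
    // currently detected, or when extended tracking keeps projecting
    // from the remembered pose.
    public boolean shouldProject(String markerName, boolean currentlyDetected) {
        if (!markerToSide.containsKey(markerName)) return false;
        return currentlyDetected || extendedTracking;
    }

    public Side sideOf(String markerName) { return markerToSide.get(markerName); }
}
```

With extended tracking off, a marker that drifts out of focus stops projecting; with it on, the remembered markers keep their imagery alive, which is exactly the stability the project relies on.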

The user is expected to use the AR content by being in the room and wearing a head-mounted device as the AR machine. The AR machine is simulated with an Android phone inside a virtual reality box; therefore, the AR content is ported to an Android phone using the Google Cardboard API.



At first, this project was initiated for an Entrepreneurship course project under the same name. That project was about establishing a restaurant where people may enjoy dining in custom environments through AR technology. The entrepreneurship project has been officially discontinued, and so has the development of the AR technology. However, modification and improvement of this project are encouraged. This is an open-source project that can be found in this git project:


Problems, Evaluations, and Future Development

To fully manipulate a human's perception of the environment, some vision-related factors need to be considered, for example, the colours and shadows of objects in accordance with lighting and distance, and the interactivity between the human and the objects. These are still very limited in this project because the AR content is made with minimal recognition of the environment.

The extended tracking feature is not yet perfect. Some objects appear shaky, and the AR content resets all the objects after a certain time. This may be fixable by customising the Unity scripts.

Sometimes the AR machine is unable to detect some image targets, especially when the target is not directly lit or not close enough to the machine. It may be better to redesign the image target or even the whole room-marking system.

Creating human interactivity with the AR-produced objects may be achieved using external motion-detecting devices such as Leap Motion or a motion-detecting camera, like the ones used in some modern gaming consoles. Another alternative is porting the AR content to a head-mounted device bundled with motion sensors, such as Microsoft HoloLens or Oculus Rift.

Augmented Reality Content: What Is It and How to Make It?

Mediated reality is the term for the ability to add or subtract information in order to manipulate a human's perception of the world. Virtual Reality (VR) is one subcategory of mediated reality that is now becoming popular in entertainment-related content such as video games and movies. VR has a sibling, another subcategory of mediated reality. Unlike VR, which 'brings' humans into the virtual world, this sibling 'brings' objects of the virtual world to the human's side. Meet Augmented Reality.
Augmented Reality (AR) enables humans to see real objects with computer-generated information overlaid on them. While VR is growing in the entertainment sector, AR is more popular in professional and industrial technology, such as Head-Up Displays (HUD) for pilots, building visualization for architects, anatomy visualization for medical students and surgeons, product visualization for commercial purposes, or sports scoreboard displays on television.
The idea of AR is to blend computer-generated imagery into the human's perspective of the world. The popular approach is to have people view objects through a computer screen that generates additional images on top of them. Some AR content also adds sound or smell to enhance the perception of the virtual objects.
Producing AR content involves many complex mathematical calculations, computer vision concepts, and computer graphics projections. Fortunately, some Augmented Reality SDKs (Software Development Kits) are available to enable straightforward development. One of the leading ones is Vuforia. Using Vuforia's integration with Unity, AR content can be made swiftly, in under 30 minutes, even by those with minimal computer science or engineering background.
Vuforia uses a marker-based approach: it captures images from a camera and detects specific marks, which may be an image, an object, or Vuforia's proprietary VuMark. The location and type of the mark determine how the information is generated on the screen. The processes of detecting marks and projecting imagery are handled by the SDK; developers only need to decide which marks to use, what information to display, and how the information reacts.
Despite being easy to use, Vuforia offers advanced AR features such as extended tracking, Smart Terrain, simultaneous tracking, object recognition, and an advanced camera API. The AR content can be ported to personal computers and hand-held devices on popular systems: Android, Universal Windows Platform (UWP), and iOS. It can also run on head-mounted devices such as Oculus Rift, Microsoft HoloLens, and Google Cardboard. The SDK integrates with popular development tools such as Unity, Android Studio, Xcode, and Adobe Illustrator (for VuMark creation). With an official website that provides adequate guides and documentation for every feature, Vuforia is suitable for developers from various backgrounds and with various needs.
AR content is growing, and we can expect even faster growth as many industries start asking for it and convenient SDKs emerge. Vuforia is one example of a popular SDK, excellent both for newcomers trying their first AR content and for experienced developers working on serious AR projects. So start developing AR content and experience sci-fi-like technology!
To start developing, simply go to the Vuforia website and create an account. Choose what you need: for Unity, Android, UWP, and so on. For beginners, the Unity SDK is highly recommended, since developing AR in Unity can be done with a few simple drag-and-drops; downloading Unity is, of course, a must as well. Numerous documents, guides, FAQs, and forum discussions are available for learning. Development is free; however, commercial and enterprise support requires one of the paid plans.

TUTORIAL: How to Develop a Simple Bluetooth Android Application


By: Hansvin Tandi Sugata

The wireless networking standard called Bluetooth has become a common way to replace wires over short distances. The term Bluetooth was coined by Ericsson in 1994. Bluetooth uses radio waves in the ISM band from 2.4 to 2.485 GHz (the same band as Wi-Fi, but with a different technology). With a gadget such as a smartphone or tablet, a Bluetooth connection is the easiest way to send and receive data.

Some of you might have wondered how to make a remotely controlled device. An Android device has Java support, a built-in Bluetooth module, and a large variety of sensors. With an open architecture and a large community, Android allows anyone to build applications with simple tools and resources. In other words, anyone can build an application for a smartphone or tablet with Android tools.

By using the built-in Bluetooth module, we can actually:

  1. Check whether our device supports Bluetooth.
  2. Find devices in the nearby area and pair with them.
  3. Open an RFCOMM (Radio Frequency Communication) channel to send and receive data.

This tutorial will teach you the basics of building an Android app using the Android Studio development environment. Therefore, we need to install Android Studio before we start making an app. You can get Android Studio freely from this source ( ).


1. Ask for Bluetooth Permission.

Create a new project. Then, in AndroidManifest.xml, add this code:


<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" />


android.permission.BLUETOOTH asks for permission to access the device's Bluetooth, whereas android.permission.BLUETOOTH_ADMIN asks for permission to discover other nearby Bluetooth devices so we can pair with them.

2. Make a command to turn on Bluetooth

In this step, we will write code that turns on Bluetooth if it is off. The code below should be placed under src/

public static final int REQUEST_ENABLE_BT = 999;
private BluetoothAdapter mBluetoothAdapter;

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Get the device's default Bluetooth adapter and ask the user
    // to enable Bluetooth if it is currently off.
    mBluetoothAdapter = BluetoothAdapter.getDefaultAdapter();
    if (!mBluetoothAdapter.isEnabled()) {
        Intent enableBtIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE);
        startActivityForResult(enableBtIntent, REQUEST_ENABLE_BT);
    }
}

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_ENABLE_BT) {
        // RESULT_OK means the user accepted the enable-Bluetooth dialog.
        String message;
        if (resultCode == RESULT_OK) {
            message = "Bluetooth is on";
        } else {
            message = "Bluetooth is off";
        }
        Toast.makeText(getApplicationContext(), message, Toast.LENGTH_LONG).show();
    }
}

3. Add ArrayLists to hold the data and the paired devices, and declare the adapter.

private ArrayList<String> items = new ArrayList<>();
private ArrayAdapter<String> adapter;
private ArrayList<BluetoothDevice> arrayPairedDevices = new ArrayList<>();

API, the developers’ assistant


By: Ferlix Yanto Wang

API, short for Application Programming Interface, is a set of program routines that perform specific tasks for building application software. An API allows communication and data transfer between software components on different devices. APIs can be found in operating systems, web-based systems, database systems, software libraries, and computer hardware. A good API makes developing a computer program easier, since it assists the developer by providing building blocks that the programmer then puts together. Just as a Graphical User Interface (GUI) helps the user operate an application, an API makes it easier for programmers to use certain technologies when building an application.

To assist the programmer in creating an application, an API applies the principle of information hiding. Through an API, developers can access the objects or actions they need from other systems without understanding the underlying implementation or the process of obtaining that data. This principle enables modular programming when building an application. With an API, developers are provided with the tools they would expect to have; by hiding the implementation details, an API helps reduce the cognitive load on the programmer.
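The information-hiding principle can be made concrete with a tiny sketch. In the hypothetical example below, callers see only the WeatherApi interface; the concrete implementation (a hard-coded stub standing in for a real remote service) stays hidden and can be swapped out without touching the callers. All names here are invented for this illustration.

```java
// Illustrative only: a made-up "weather API" showing information hiding.
public class ApiHidingDemo {
    // The API surface: all the caller ever sees.
    public interface WeatherApi {
        int currentTemperature(String city); // degrees Celsius
    }

    // The hidden implementation. A real one might perform an HTTP
    // request, but the caller neither knows nor cares.
    static class StubWeatherApi implements WeatherApi {
        @Override
        public int currentTemperature(String city) {
            return "Jakarta".equals(city) ? 31 : 20; // canned data
        }
    }

    // Factory method: the concrete class can be replaced here
    // without changing any code that uses WeatherApi.
    public static WeatherApi connect() {
        return new StubWeatherApi();
    }
}
```

The caller only ever writes `connect().currentTemperature("Jakarta")`; swapping the stub for a networked implementation changes nothing on the caller's side, which is the cognitive-load reduction described above.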


Here is an example of the use of an API. With an API, an application can check the current weather of a city by requesting the data from a website; the website's API then replies to the request with data about the weather conditions (usually in JSON format). Moreover, even if the website's display changes, the structure of the request and response remains the same, since it has been documented by the website through the API. In comparison, without an API, the application could only find the current weather of a city by accessing the website that provides the data and reading through the content of the webpage, and it would usually rely on the website's display never changing, because it would no longer be able to parse the page once its look changed. In addition, here are several popular APIs that are widely used by developers when building an application:

  1. Google Maps API

The Google Maps API lets developers embed Google Maps in webpages using a JavaScript or Flash interface. It is designed to work on both mobile devices and desktop browsers.


  2. YouTube APIs

Google's YouTube APIs let developers integrate YouTube videos and functionality into websites or applications. They include the YouTube Analytics API, YouTube Data API, YouTube Live Streaming API, YouTube Player APIs, and others.


  3. Flickr API

The Flickr API is used by developers to access the Flickr photo-sharing community's data. It consists of a set of callable methods and some API endpoints.


  4. Twitter APIs

Twitter offers two APIs: the REST API allows developers to access core Twitter data, and the Search API provides methods for interacting with Twitter Search and trends data.
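Returning to the weather example above, the request/response exchange can be sketched in plain Java. The JSON shape and the field names ("temp", "humidity") are hypothetical, the HTTP request is replaced by a canned response string for brevity, and the value is pulled out with plain string handling (a real application would use a proper JSON library):

```java
// Illustrative only: extracting a numeric field from a flat JSON
// object like the hypothetical weather response described above.
public class WeatherResponse {
    public static double numberField(String json, String field) {
        String key = "\"" + field + "\":";
        int start = json.indexOf(key);
        if (start < 0) throw new IllegalArgumentException("missing field: " + field);
        start += key.length();
        // Skip any spaces between the colon and the value.
        while (start < json.length() && json.charAt(start) == ' ') start++;
        // Consume the run of numeric characters.
        int end = start;
        while (end < json.length()
                && "-+.0123456789".indexOf(json.charAt(end)) >= 0) end++;
        return Double.parseDouble(json.substring(start, end));
    }
}
```

The key point the prose makes survives in the sketch: as long as the documented field names stay stable, the parsing code keeps working no matter how the website's human-facing pages change.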

In conclusion, an API is a really useful and beneficial tool for assisting programmers during application development. APIs allow developers to create feature-rich applications without adding extra workload to the process.





Accelerometer and Gyroscope in Android Motion Sensors


By: Kensen

Older Android devices implement an accelerometer and a magnetometer to monitor the motion of the device. An accelerometer measures the linear acceleration of the device, relative to gravitational acceleration, along one dimension, while a magnetometer measures the direction and strength of the magnetic field along one dimension (like a compass). However, using a magnetometer as a motion sensor has become obsolete, because the Earth's magnetic field is weaker than common disturbances nearby, such as steel furniture, making it ineffective for reading the lateral orientation of the device. Thus, Android devices now implement a gyroscope as part of their motion sensors. A gyroscope measures the angular velocity of the device around one rotational axis.

The combination of accelerometer and gyroscope creates a powerful motion sensor. A single Android motion-sensing device contains three perpendicularly mounted accelerometers and three gyroscopes mounted along the rotational axes. This motion sensor is applicable in health-management apps such as movement trackers: movements such as walking, running, and jumping can easily be recognized based on the acceleration (read by the accelerometers) and the angular velocity (read by the gyroscopes) of the device.

Accelerometers use the standard sensor coordinate system, which implies the following when a device lies flat on a table in its natural orientation:

  1. If it is pushed to the right, the x-axis acceleration value is positive.
  2. If it is pushed away, the y-axis acceleration value is positive.
  3. If it is pushed towards the sky with an acceleration of A m/s², the z-axis acceleration value is A + 9.81, which is A minus the force of gravity (-9.81 m/s²).
  4. If it is not pushed at all, the z-axis acceleration value is 9.81 m/s².
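These conventions can be put to work with a little arithmetic: when the device lies still, the magnitude of the (x, y, z) acceleration vector stays close to gravity (about 9.81 m/s²), and it deviates during movement. The sketch below uses that observation for a crude moving/at-rest check; the threshold is invented for illustration, and real motion recognizers are considerably more elaborate.

```java
// Illustrative only: a crude moving/at-rest check based on the
// accelerometer conventions described above.
public class MotionCheck {
    static final double GRAVITY = 9.81; // m/s^2

    // Magnitude of the acceleration vector reported by the sensor.
    public static double magnitude(double x, double y, double z) {
        return Math.sqrt(x * x + y * y + z * z);
    }

    // Treat the device as moving when the magnitude strays from
    // gravity by more than a chosen tolerance (in m/s^2).
    public static boolean isMoving(double x, double y, double z, double tolerance) {
        return Math.abs(magnitude(x, y, z) - GRAVITY) > tolerance;
    }
}
```

A device flat on the table reports roughly (0, 0, 9.81) and is classified as at rest, while a sideways shove adds an x or y component and pushes the magnitude away from 9.81.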

Gyroscopes use the same coordinate system as accelerometers, with the convention that, around each axis, rotation is positive when the device rotates counter-clockwise.

In conclusion, current motion-sensor technology uses an accelerometer and a gyroscope to monitor motion. The gyroscope is more effective than the magnetometer because it works in any environment without being disturbed. By reading the values from the accelerometer and gyroscope, the device can pinpoint its exact motion.

Leap Motion: Its Past and Its Future

By: Yoshua Muliawan

Leap Motion is a relatively new American tech startup that focuses on producing and developing specialized computer hardware sensors that detect finger and palm motion as input. First developed in 2008, Leap Motion has since produced an array of products, such as the consumer-marketed Leap Motion Controller as well as its latest accompanying software, Orion, which is designed for hand tracking in virtual reality.

The company had already received several rounds of investment in 2011-2013, following the initial angel investment that formed it. With the money gathered, its cofounder David Holz, a maths Ph.D., worked quietly and finally launched the first product, dubbed The Leap, on May 21, 2012. Further iterations of the product have since been made, including the latest, named the Leap Motion Controller.

The basic premise of the technology is actually quite simple. Each device is equipped with two cameras as well as three infrared LEDs. The LEDs emit light with a wavelength of 850 nanometres, making it invisible to the human eye. The cameras then capture the reflected infrared light and produce images that are sent to the computer. Once received, the Leap Motion Service software processes the images and generates a 3D map of the hands, which can be utilised by the various apps that take advantage of the sensor.

Leap Motion technology unlocks enormous potential in its usage. For an average consumer like you and me, it allows us to experience more immersive and realistic games. Coupled with a VR headset such as an Oculus Rift, it can bring already extremely realistic games to life. The technology can also be used in other fields, predominantly science. Just recently, NASA engineers at GDC revealed that they had used the Leap Motion Controller to remotely control a massive one-ton, six-legged ATHLETE rover located at the Jet Propulsion Laboratory (JPL) in Pasadena. The ATHLETE (All-Terrain Hex-Limbed Extra-Terrestrial Explorer) is a heavy-lift utility vehicle prototype designed for future human space exploration on the Moon, Mars, and even asteroids.

Leap Motion technology is still in its infancy, and there is still a lot of room for improvement. In the future, with increased use of the sensors, we might have whole rooms filled with many of these little sensors, or one big one, to detect our smallest gestures. It might also help us build a real-life JARVIS from the Iron Man movies, with its stunning motion-controlled holograms. Furthermore, this technology might be applicable to the vast fields of medical science and surgery: imagine a surgeon performing extremely precise remote operations with this tiny device from half a globe away. There truly are endless possibilities when it comes to the application of this device.

All in all, Leap Motion technology is a relatively new invention that still requires a lot of further development. It has shown us what it can do, as well as what its future might look like. With the speed at which technology is evolving right now, this versatile technology might mature a lot sooner than we expect.

Raspberry Pi as the Future of Computing

By: Christoper Lychealdo

Technology is always developing and in constant flux. One of the most interesting and most celebrated developments is the shrinking size of electronic devices: they get smaller and thinner every time a new device is released to the market, be it smartphones, laptops, televisions, or other electronics.

Starting in 2012, the Raspberry Pi Foundation from the United Kingdom has been producing palm-sized computers named Raspberry Pi. The purpose of the Raspberry Pi is to serve as a medium for teaching the basics of computer science. All it needs to be operational is a power adaptor, input devices (keyboard, mouse, etc.), and a display. It also has Wi-Fi, so you can surf the internet and get the tools you need to exploit the Raspberry Pi's capabilities. You can even set it up to connect remotely to your other devices, such as laptops and even smartphones, over Wi-Fi, which gives you great portability: you can work anywhere as long as you have an input device and a display.

Even with its lower specifications, the Raspberry Pi runs smoothly, and it comes with basic tools for learning the fundamentals of computer science. For development purposes, there are lots of APIs and libraries available on the internet, mostly in Python, to help developers work on the Raspberry Pi. For a student who knows little to nothing about hardware, I think the Raspberry Pi is a good way to start learning about hardware, and even how to program it. It also gives me the opportunity to learn Python, a currently trending programming language that is used in many fields of IT and even as a language for learning programming concepts. The Raspberry Pi comes with Python pre-installed, and it can run other IDEs, such as NetBeans, to develop in other programming languages, such as Java and C/C++.

Although the Raspberry Pi is intended as a medium for learning the basics of computer science, that does not preclude other uses. With its advantage in size over the average personal computer, the Raspberry Pi can serve other purposes, for example, as a central hub for processing data from other devices, such as heat or motion sensors.

In conclusion, the Raspberry Pi is part of the future of computing, as it helps people learn the basics of computer science and can be developed for other purposes. Its size is also an important aspect, as it gives users and developers portability when working with the Raspberry Pi.

Pebble for “First-Time” Developers

By: Archel Taneka & Hansvin Tandi Sugata

Have you ever thought about building and developing your own app? You might think that building an app is difficult for beginners who can barely shout "Hello World" to the console. Don't let that stop you! We programmers are not bound by limits, and beginners can build apps too. To be honest, I am currently in my first semester, and I am aware that my programming skills are not that high, but I was interested in building and developing an app for the first time. So why don't you try it yourself? It's not that hard; you just need extra time and hard work, and your friends will be caught by surprise when you build your own app! Maybe you haven't heard about Pebble? To put it simply, it is a smartwatch, and if you want to build your own interface for it, Pebble has its own SDK.

As I've said before, Pebble has its own SDK. You can either download it here or use CloudPebble (click this link to proceed to CloudPebble). You must be wondering which one is better. Well, the downloadable SDK does not run on Windows, so CloudPebble is highly recommended for Windows users. I used CloudPebble myself: just sign up for an account and you can start working. Both the SDK and CloudPebble use C or JavaScript as the programming language. That is why I said building this app is possible for beginners: most of them learn C in their first year of university. How about JavaScript? If you passed a C programming course, then learning JavaScript is a piece of cake; programming languages largely share the same logic, and mostly the syntax differs, so you just need to learn how the JS syntax works. Then all you need is a Pebble smartwatch, which costs about 3 million rupiah. You can ask your friends to build the app together, so you can also split the cost. You will also have to download the Pebble app to your smartphone to connect the phone and the watch through Bluetooth. If you cannot afford the watch, don't worry: the SDK and CloudPebble both include an emulator.

Try typing some code and see if you can put a simple hello world on the screen. Don't worry, there are tutorials on building your own app; apart from that, explore on your own. As you go on with your exploration, you may get stuck and be unable to go further with the development. As one of the developers, I can point out some drawbacks Pebble has, which may also be why you're stuck. Here are the things you should note:


  1. Watchfaces, stopwatch, and timer utilities only work while the Pebble is connected to the smartphone.

Yes, these utilities only work if we connect the watch to the smartphone via Bluetooth. For watchfaces, there are three default ones you can use on the watch itself. What is a watchface? You will learn about it in the tutorial on building this app; briefly, a watchface is the 'standby' display currently shown on your watch. How is it connected to your smartphone? Before you build and run your code, you choose how to run it: on the emulator or on the phone. If you choose the emulator, your work is displayed in the emulator; if you choose the phone, you connect your watch to the smartphone.

  2. No Wi-Fi available on the watch.

I think this is the most crucial part, am I right? The key to the digital world is Wi-Fi; without it you feel like an antisocial person. But believe it or not, Pebble has no Wi-Fi capability. So how does it connect to your social media, such as WhatsApp, Line, and the like? The answer is, again, your smartphone. Everything is connected through Bluetooth, so when you get a message, it also pops up on your watch. Receiving a message doesn't mean that replying or sending a message from the watch is possible; keep in mind that the watch works like a reminder, since we keep our phones in our pockets while the watch stays on our wrists.

Well, that's pretty much it. For me, developing this kind of app is suitable for beginners who love making and developing apps, and it's doable. But once again, it depends on the effort you put into the process of building and developing. Go at your own pace, and remember: all great apps and programs came from little things.

Happy coding!

Leap Motion Sensor as A Potential Input Device


By: Radityo Noeraldi Arief

The use of input devices such as the mouse, keyboard, and joystick has always been necessary to control and send information to applications on a computer. As technology changes, input devices also evolve to suit most of the changes. Before Sony created a physical keyboard for the PlayStation to give more control over character input, people used the joystick to enter characters on an on-screen keyboard. There are many input methods and devices; one of them is the Leap Motion sensor. Leap Motion is a sensor device that detects hand and finger motion as input, which can be used to perform certain actions in an application. This can make interaction more engaging, because the user does not directly touch the device. There are pros and cons to this device, but does it have the potential to become a relevant input device and development platform?

People may still prefer other input devices over Leap Motion, as devices like the mouse and keyboard are easier to use and people are already used to them through their work. Leap Motion may be more interesting to people in game development, especially in virtual and augmented reality. It could also be used in robotics, controlling a machine with hand movements, like the scene in Iron Man 3 where Tony Stark controls parts of his suit with his hand. It may even help disabled people who have problems using a mouse or other input devices. Thus, Leap Motion can be used in many fields, but the questions are how it can develop in each field and how to convince people to use it. People may hesitate to adopt it because the hand-movement tracking has to be nearly perfect for it to function properly, which can take away their reasons to use it. But if it is developed well, it is possible to create applications that suit the right people and environments.

The Leap Motion device can be used to develop applications, but problems arise not only from the programming process but also from the device itself. Leap Motion has an API that can be used to develop applications in many programming languages, such as C++, C#, Java, Python, and others, as documented on Leap Motion's developer website. As mentioned before, it is also possible to create games with Leap Motion using game engines such as Unity, which requires the assets and SDK provided by the Leap Motion developers. An issue for the application developer is how to handle the data produced by the Leap Motion itself. If one wants to develop an application with Leap Motion, one has to understand the advantages and weaknesses of the device as an input device. During my first time using Leap Motion, I found that it has problems detecting hands that overlap each other, and fingers other than the index finger and pinky.

In brief, Leap Motion can be used in a variety of fields, but it depends on how relevant it can be and how it is developed in each field. As a Leap Motion developer, one has to understand the capabilities and weaknesses of the device so that it will not be difficult to create an application using the API provided by the Leap Motion team.