
Programming LEGO NXT without code (with Microsoft Robotics Studio 1.5)

Dewi Maya

What is Visual Programming Language?

The new LEGO MINDSTORMS NXT services have been designed expressly for use in Visual Programming Language. These services are easy to configure and provide an extensible architecture so that third-party sensors can be added at any time. The services provided include motors and sensors for the standard NXT devices, as well as a few examples of sensors available from HiTechnic and MindSensors.


Microsoft Visual Programming Language (VPL) is an application development environment designed around a graphical data-flow-based programming model rather than the control flow typically found in conventional programming. Rather than a series of imperative commands executed sequentially, a data-flow program is more like a series of workers on an assembly line, each doing its assigned task as the materials arrive. As a result, VPL is well suited to programming a variety of concurrent or distributed processing scenarios.
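
VPL programs themselves are drawn rather than typed, but the assembly-line idea can be illustrated with ordinary code. The following is a minimal C# sketch using standard .NET concurrent collections (not VPL or MSRS APIs); the stage names and values are invented for illustration. Each worker runs independently and handles items as soon as they arrive on its incoming queue.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class AssemblyLineSketch
{
    static void Main()
    {
        // Two queues connect three "workers", like links between VPL activities.
        var rawReadings = new BlockingCollection<int>();
        var scaled = new BlockingCollection<double>();

        // Worker 1: produce raw readings (here, just made-up numbers).
        var producer = Task.Run(() =>
        {
            for (int i = 0; i < 5; i++) rawReadings.Add(i * 100);
            rawReadings.CompleteAdding();
        });

        // Worker 2: scale each reading as soon as it arrives.
        var scaler = Task.Run(() =>
        {
            foreach (var r in rawReadings.GetConsumingEnumerable())
                scaled.Add(r / 255.0);
            scaled.CompleteAdding();
        });

        // Worker 3: consume the scaled values.
        var consumer = Task.Run(() =>
        {
            foreach (var s in scaled.GetConsumingEnumerable())
                Console.WriteLine($"scaled value: {s:F2}");
        });

        Task.WaitAll(producer, scaler, consumer);
    }
}
```

In VPL the queues and workers are implicit: each link arrow carries messages, and each activity runs its handler whenever a message arrives.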

VPL is targeted at beginner programmers with a basic understanding of concepts like variables and logic. However, VPL is not limited to novices. The compositional nature of the language may appeal to more advanced programmers for rapid prototyping or code development. In addition, while its toolbox is tailored to developing robot applications, the underlying architecture is not limited to programming robots and could be applied to other applications as well. As a result, VPL may appeal to a wide audience of users, including students, enthusiasts and hobbyists, and possibly web developers and professional programmers.

Figure: Sample Microsoft Visual Programming Language (VPL) diagram for a simple bump-turn-go wander behavior.

A Microsoft Visual Programming Language data-flow consists of a connected sequence of activities, represented as blocks with inputs and outputs that can be connected to other activity blocks.
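
For readers used to text-based code, the behavior that diagram describes might look roughly like the sketch below when written imperatively. The IBumper and IDrive interfaces and all member names are hypothetical placeholders, not MSRS or NXT APIs, and the speeds and delays are arbitrary.

```csharp
// Hypothetical interfaces, for illustration only (not MSRS types).
public interface IBumper { bool Pressed { get; } }
public interface IDrive
{
    void SetSpeed(double left, double right); // wheel power, -1.0 to 1.0
}

public class WanderBehavior
{
    private readonly IBumper bumper;
    private readonly IDrive drive;

    public WanderBehavior(IBumper bumper, IDrive drive)
    {
        this.bumper = bumper;
        this.drive = drive;
    }

    // Called repeatedly, e.g. on a timer or whenever a bumper message arrives.
    public void Step()
    {
        if (bumper.Pressed)
        {
            drive.SetSpeed(-0.5, -0.5);            // back up
            System.Threading.Thread.Sleep(500);
            drive.SetSpeed(0.5, -0.5);             // turn in place
            System.Threading.Thread.Sleep(500);
        }
        drive.SetSpeed(0.7, 0.7);                  // go forward
    }
}
```

In the VPL version there is no explicit loop: the same decision is wired graphically between the bumper's output and the drive's input, and the data-flow runs each time the bumper sends a message.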


Figure: The link arrows between the blocks represent messages that send data from one activity to another.

Activities can represent pre-built services, data-flow control, functions, or other code modules. The resulting application is therefore often referred to as an orchestration, the sequencing of separate processes. Activities can also include compositions of other activities. This makes it possible to compose activities and reuse the composition as a building block. In this sense, an application built in VPL is itself an activity.

Activity blocks typically include the activity's name and borders that represent its connection points. An activity block may also include graphics to illustrate the purpose of the activity, as well as user interface elements that enable the user to enter values, assignments, or transformations for data used in the activity.

Activities are connected through their connection pins. A connection pin on the left side of an activity represents its connection point for incoming (input) messages, and a pin on the right represents its connection point for outgoing (output) messages.

An activity receives messages containing data through its input connection pins. An activity's input pins are connection points to its predefined internal functions, known as actions or handlers (which can be either functions provided by a service or nested data-flows).

An activity block activates and processes the incoming message data as soon as it receives a valid incoming message. All data sent to the activity is consumed by the activity. For data to be forwarded on through the activity's output, the receiving activity must replicate the data and put it on its output connection.

An activity may have multiple input connection pins, each with its own set of output connection pins. Output connection pins can be one of two kinds: a result output or a notification output (sometimes also referred to as an event or publication output). Result outputs are displayed as rectangular connection pins, while notification outputs have round connection pins.


A result output pin is used in situations where an outgoing message (data) is sent as the result of a specific incoming action message. Notification pins can send information resulting from an incoming message, but more typically they fire a message as the result of a change in the activity's internal state. They can also generate messages multiple times, whereas a result pin sends out only a single message on receipt of an incoming message. Notification output pins are therefore used for sending message data without having to repeatedly request or poll for the state of an activity.
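
As a rough illustration of this distinction, the C# sketch below models an activity with one input handler, a result output that replies once per incoming message, and a notification output that fires only when internal state changes. The class, events, and threshold value are invented for this example; they are not VPL or MSRS types.

```csharp
using System;

// Illustrative only: models an activity with one input pin,
// a result output, and a notification output.
public class ThresholdActivity
{
    private bool aboveThreshold;                 // internal state
    public double Threshold { get; set; } = 0.5;

    // Result output: exactly one outgoing message per incoming message.
    public event Action<bool> ResultOutput;

    // Notification output: fires only when the internal state changes,
    // so downstream activities do not have to poll.
    public event Action<bool> Notification;

    // Input pin: the handler runs as soon as a message arrives.
    public void OnInput(double value)
    {
        bool above = value > Threshold;
        ResultOutput?.Invoke(above);             // always answer the request

        if (above != aboveThreshold)             // publish only on state change
        {
            aboveThreshold = above;
            Notification?.Invoke(above);
        }
    }
}
```

Connecting two activities then amounts to subscribing one activity's output to another activity's input handler, which is essentially what a link arrow in a VPL diagram expresses.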

Powerful Physics-Based Simulation Engine

Microsoft Robotics Studio targets a wide audience in an attempt to accelerate robotics development and adoption. An important part of this effort is the simulation runtime. It was immediately obvious that PC and console gaming has paved the way when it comes to affordable, widely usable robotics simulation. Games rely on photo-realistic visualizations with advanced physics simulation running within real-time constraints. This was a perfect starting point for our effort.

We designed the simulation runtime to be used in a variety of advanced scenarios with high demands for fidelity, visualization, and scaling. At the same time, a novice user with little to no coding experience can use the simulation, developing interesting applications in a game-like environment. Our integration of the AGEIA PhysX technology enables us to leverage a very strong physics simulation product that is mature and constantly evolving toward features that will be invaluable to robotics. The rendering engine is based on the Microsoft XNA Framework.

Challenges Posed by Robotics Development

Robotics Hardware Can Be Expensive and Hard to Find

Modular robotics platforms, like the LEGO® MINDSTORMS™ and fischertechnik®, have made robotics affordable to a wide consumer audience. These platforms are an excellent starting point for the educational market and hobbyists. But if you want to scale up, in terms of the complexity of the robot or the number of individual robots, cost prohibits most people from going further.


Difficulty of Hardware Troubleshooting

Troubleshooting hardware, even widespread consumer hardware like a DVD player or a TV, is difficult. Consumer electronics just happen to be extremely reliable, so most people don't have to worry about things going wrong. When you're putting together a robot (especially a custom robot based on a modular platform with off-the-shelf parts), significant skill, time, and effort are expended "debugging" your physical setup.

Difficulty of Concurrent Use

Developing an advanced robot (like the vehicles that competed in the DARPA competitions) with a team of people is becoming a common occurrence. One of the challenges is that, often, the robot being developed is expensive and there is only one. These two properties make it difficult to try things concurrently with others without the danger of destroying the robot. This forces people to develop components in isolation, making integration harder and introducing hard-to-find bugs.

Benefits of Simulation

Low Barrier to Entry

Simulation enables people with a personal computer to develop very interesting robots or robot swarms, with the main limiting factors becoming time and imagination. At the same time, it constrains them in ways similar to physical robots, so they can focus their effort on something that can be realized.

Staged Approach

Microsoft Robotics Studio approaches simulation in stages, allowing people to deal with complexity at the right time. This means you can "debug" your simulated robot starting with basic primitives and requiring only basic knowledge. It takes very little effort to place such a virtual robot in an environment, along with some simple shapes to interact with. This means debugging, even in the simulation, is simpler.

Prototyping

Physical models for a robot and the simulation services that use them can be developed concurrently by many people, and, just like in many software development communities, they can create a "platform" that many can use and modify without worrying about breaking expensive, unique robots.

Education

Simulation can be an extremely useful instructional aid. You can choose what to focus on, build up complexity, and control the environment. You can also introduce components that are purely virtual: concepts that cannot easily be realized physically but are still useful for learning.

Learning System

Another really interesting aspect of simulation is that it can be used while the robot is running, as a predictive tool or supervised learning module. For quite some time, people have used simulation running concurrently with an active robot to try things out in a simulation world that is updated in real time with sensory data. The simulation can then tell them (probabilistically) whether something is a good idea, "looking ahead" at the various possibilities.

Simulation Drawbacks and Limitations

We are essentially trying to turn a hardware problem into a software one. However, developing software and a physics model has its own challenges, so we end up with a different set of challenges and limitations. Usually this means that there is a sweet spot: a range of applications where simulation is very appropriate, and a range of applications, or stages in development, where using the real robot is essential or easier. As we improve the simulation runtime, the range where simulation is appropriate expands. The increase in processing power, plus the concurrent and distributed nature of Microsoft Robotics Studio, should help address some of these issues.

Lack of Noisy Data

People involved in the large robotics challenges will tell you that you must spend serious time with the real robot, no matter how good the simulation is. This is partially because we have a lot of work left to do in making simulation more usable and more realistic. But it is also because the real world is unpredictable and complex, with lots of noise being picked up by sensors.

Incomplete and Inaccurate Models

A large number of effects in the real world are still unexplained or very hard to model. This means you may not be able to model everything accurately, especially in real time. For certain domains, like wheeled vehicles, motion at low speeds is still a big challenge for simulation engines. Modeling sonar is another.

Lots of Time for Tuning

In the simulation runtime, it's actually very easy to get a robot running around the virtual world and interacting with other objects. However, it still requires significant effort to tune the simulated hardware (we call these entities) to behave like its real-world counterparts. By using AGEIA PhysX technology you already have a very good starting point, but more effort is required on automated tools for tuning simulation parameters.

Overview of Simulation Runtime

The simulation runtime is composed of the following components:

The Simulation Engine Service - responsible for rendering entities and advancing the simulation time for the physics engine. It keeps track of the entire simulation world state and provides the service/distributed front end to the simulation.

The Managed Physics Engine Wrapper - abstracts the user from the low-level physics engine API and provides a more concise, managed interface to the physics simulation.

The Native Physics Engine Library - enables hardware acceleration through the AGEIA PhysX Technology processor (available in PhysX accelerator add-in cards for PCs).

Entities - represent hardware and physical objects in the simulation world. A number of entities come predefined with Microsoft Robotics Studio, enabling users to quickly assemble them and build rich simulated robot platforms in various virtual environments.

You can choose to interact only with the managed physics engine API if you don't want any visualization. However, it is strongly recommended that you always use the simulation engine service and define custom entities that disable rendering. This greatly simplifies persistence of state, inspection, and debugging of simulation code.

The rendering engine uses the programmable pipeline of graphics accelerator cards conforming to DirectX 9 pixel/vertex shader standards. The simulation tutorials show how the simulation runtime makes it easy to supply your own effects and have the engine manage loading, rendering, updates to effect state, and so on.

Simulation Programming

The Simulation Tutorials Overview (http://msdn2.microsoft.com/en-us/library/bb483075.aspx) shows you how to simulate a wheeled robot with a couple of onboard sensor devices. Two software components are usually involved in simulating a physical component and its service (a rough sketch of this split follows the list):

An Entity - the software component that interfaces with the physics engine and the rendering engine. It exposes appropriate high-level interfaces to emulate hardware and hides the specific use of the physics APIs.

A Service - uses the same types and operations as the service it is simulating and provides the distributed front end to the entity, just like robotics services provide the front end to robot hardware.
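
The skeleton below suggests what this entity/service pairing can look like. All type and member names here are placeholders invented for illustration; they are not the actual MSRS simulation API, which uses its own base classes and messaging infrastructure.

```csharp
// Skeleton only: placeholder types, not the actual MSRS simulation API.

// The entity talks to the physics and rendering engines.
public class SimulatedRangeFinderEntity
{
    // Imagined to be called by the simulation engine every frame.
    public void Update(double elapsedSeconds)
    {
        // Cast rays against the physics scene and store the latest distances...
    }

    public double[] LatestDistances { get; private set; } = new double[0];
}

// The service exposes the same contract a real range-finder service would,
// so client code cannot tell whether it is talking to hardware or simulation.
public class SimulatedRangeFinderService
{
    private readonly SimulatedRangeFinderEntity entity;

    public SimulatedRangeFinderService(SimulatedRangeFinderEntity entity)
    {
        this.entity = entity;
    }

    // Handler for a "get" request message from a client.
    public double[] HandleGetRequest()
    {
        return entity.LatestDistances;   // forward the entity's current state
    }
}
```

Because the service keeps the same contract as its hardware counterpart, client code written against the simulated service can later be pointed at the real robot without changes.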

For a walk-through of interacting with the simulation runtime, please refer to the simulation tutorials. Samples of simulation services that emulate sensors, actuators, and higher-level services can be found in the Samples folder of your Microsoft Robotics Studio installation directory.


Simulation Screenshots

Modular robot base with differential drive, laser range finder, and bumper array. The second image is of the physics primitive view, which shows how the table and robot are approximated by solid shapes.


Close-up of a multi-shape environment entity and its physics model.

The Simple Dashboard monitoring a simulated laser in the physics view.


Complex file-based mesh entity, with a simplified convex mesh generated at run time.


Friends - three MobileRobots Pioneer3DX robots with lasers, plus the new LEGO® MINDSTORMS™ NXT. There is something wrong here... see if you can spot it: the mesh for the LEGO robot has 172,000 triangles...


Uneven ground surface. Example of using a height field entity for the ground, with random height samples every one meter. A different material can be supplied per sample, allowing for advanced outdoor simulations. The red dots in the first image are the visualization of the laser impact points from the laser range finder entities.


For more details and tutorials: www.microsoft.com/robotics

The tutorials for Microsoft Robotics Studio 1.5 are written in C# with Visual Studio.

For a free version of Visual Studio, go to: www.microsoft.com/express