ACT Practicals for All with Digital Signature



    Advance Computing Technology


    INDUS Institute of Technology and Engineering

    Practical 1

    Aim: To study and practice the Beowulf project

    Theory:

    What makes a cluster a Beowulf?

    Cluster is a widely-used term meaning independent computers combined into a unified system through software and networking. At the most fundamental level, when two or more computers are used together to solve a problem, it is considered a cluster. Clusters are typically used for High Availability (HA) for greater reliability or High Performance Computing (HPC) to provide greater computational power than a single computer can provide.

    Beowulf Clusters are scalable performance clusters based on commodity hardware, on a private system network, with open source software (Linux) infrastructure. The designer can improve performance proportionally with added machines. The commodity hardware can be any of a number of mass-market, stand-alone compute nodes, as simple as two networked computers each running Linux and sharing a file system or as complex as 1024 nodes with a high-speed, low-latency network.

    Class I clusters are built entirely using commodity hardware and software using standard technology such as SCSI, Ethernet, and IDE. They are typically less expensive than Class II clusters, which may use specialized hardware to achieve higher performance.

    Common uses are traditional technical applications such as simulations, biotechnology, and petro-clusters; financial market modeling, data mining and stream processing; and Internet servers for audio and games.

    Beowulf programs are usually written using languages such as C and FORTRAN. They use message passing to achieve parallel computations. See Beowulf History (http://www.beowulf.org/overview/history.html) for more information on the development of the Beowulf architecture.

    One question that is commonly enough asked on the Beowulf list is "How hard is it to build or care for a beowulf?"

    Mind you, it is quite possible to go into beowulfery with no more than a limited understanding of networking, a handful of machines (or better, a pocketful of money) and a willingness to learn, and over the years I've watched and sometimes helped as many groups and individuals (including myself) in many places went from a state of near-total ignorance to a fair degree of expertise on little more than guts and effort.

    However, this sort of school is the school of hard (and expensive!) knocks; one ought to be able to do better and not make the same mistakes and reinvent the same wheels over and over again, and this book is an effort to smooth the way so that you can.

    One place that this question is often asked is in the context of trying to figure out the human costs of beowulf construction or maintenance, especially if your first cluster will be a big one and has to be right the first time. After all, building a cluster of more than 16 or so nodes is an increasingly serious proposition. It may well be that beowulfs are ten times cheaper than a piece of "big iron" of equivalent power (per unit of aggregate compute power by some measure), but what if it costs ten times as much in human labor to build or run? What if it uses more power or cooling? What if it needs more expensive physical infrastructure of any sort?

    These are all very valid concerns, especially in a shop with limited human resources or with little Linux expertise or limited space, cooling, or power. Building a cluster with four nodes, eight nodes, perhaps even sixteen nodes can often be done so cheaply that it seems "free" because the opportunity cost for the resources required is so minimal and the benefits so much greater than the costs.


    Building a cluster of 256 nodes without thinking hard about cost issues, infrastructure, and cost-benefit analysis is very likely to have a very sad outcome, the least of which is that the person responsible will likely lose their job.

    If that person (who will be responsible) is you, then by all means read on. I cannot guarantee that the following sections will keep you out of the unemployment line, but I'll do my best.

    Projects:

    Here is a partial list of other sites that are working on Beowulf-related projects:

    Grendel - Clemson University - PVFS and system development - http://ece.clemson.edu/parl/grendel.htm

    Drexel - Drexel University - cyborg cluster - http://einstein.drexel.edu/~josephin/Cyborg

    Stone SouperComputer - Oak Ridge National Lab (ORNL) - a 126-node cluster at zero dollars per node - http://www.esd.ornl.gov/facilities/beowulf

    Naegling - CalTech's Beowulf Linux cluster - http://www.cacr.caltech.edu/beowulf/

    Loki - Los Alamos Beowulf cluster (has an especially cool logo) - http://loki-www.lanl.gov/

    theHive - Goddard Space Flight Center - one of the large Beowulf clusters at Goddard - http://newton.gsfc.nasa.gov/thehive

    AENEAS - University of California, Irvine - http://aeneas.ps.uci.edu/aeneas

    The beowulf book referred to above: http://www.phy.duke.edu/~rgb/Beowulf/beowulf_book/beowulf_book/index.html

    Practical 2

    Aim: To study the Berkeley NOW project.

    Theory:

    The Berkeley Network of Workstations (NOW) project seeks to harness the power of clustered machines connected via high-speed switched networks. By leveraging commodity workstations and operating systems, NOW can track industry performance increases. The key to NOW is the advent of the killer switch-based and high-bandwidth network. This technological evolution allows NOW to support a variety of disparate workloads, including parallel, sequential, and interactive jobs, as well as scalable web services, including the world's fastest web search engine, and commercial workloads, such as NOW-Sort, the world's fastest disk-to-disk sort. On April 30th, 1997, the NOW team achieved over 10 GFLOPS on the LINPACK benchmark, propelling the NOW into the top 200 fastest supercomputers in the world! More NOW news is available at http://now.cs.berkeley.edu/nowNews.html. The NOW Project is sponsored by a number of different contributors (http://now.cs.berkeley.edu/nowSponsors.html).

    The Berkeley NOW project is building system support for using a network of workstations (NOW) to act as a distributed supercomputer on a building-wide scale. Because of the volume production, commercial workstations today offer much better price/performance than the individual nodes of MPPs. In addition, switch-based networks such as ATM will provide cheap, high-bandwidth communication. This price/performance advantage is increased if the NOW can be used for both the tasks traditionally run on workstations and these large programs.

    In conjunction with complementary research efforts in operating systems and communication architecture, we hope to demonstrate a practical 100 processor system in the next few years that delivers at the same time:

    (1) better cost-performance for parallel applications than a massively parallel processing architecture (MPP), and

    (2) better performance for sequential applications than an individual workstation.

    This goal requires combining elements of workstation and MPP technology into a single system. If this project is successful, it has the potential to redefine the high-end of the computing industry.

    To realize this project, we are conducting research and development into network interface hardware, fast communication protocols, distributed file systems, and distributed scheduling and job control.

    The NOW project is being conducted by the Computer Science Division at the University of California at Berkeley.

    The core hardware/software infrastructure for the project will include 100 SUN Ultrasparcs and 40 SUN Sparcstations running Solaris, 35 Intel PCs running Windows NT or a PC UNIX variant, and between 500-1000 disks, all connected by a Myrinet switched network. Most of this hardware/software has been donated by the companies involved. In addition, the Computer Science Division has been donated more than 300 HP workstations which we are also planning on integrating into the NOW project.

    Using GLUnix

    Taking advantage of NOW functionality is straightforward. Simply ensure that /usr/now/bin is in your shell's PATH, and /usr/now/man in the MANPATH. To start taking advantage of GLUnix functionality, log into now.cs.berkeley.edu and start a glush shell. While the composition of the GLUnix partition may change over time, we make every effort to guarantee that now.cs is always running GLUnix. The glush shell runs most commands remotely on the lightly loaded nodes in the cluster.


    Load-balancing GLUnix shell scripts are available. Syntax is identical to the csh command language. Simply begin your shell scripts with #!/usr/now/bin/glush. Note that you do not have to be running glush as your interactive shell in order to run load-balanced shell scripts.

    Utility Programs

    We have built a number of utility programs for GLUnix. All of these programs are located in /usr/now/bin. Man pages are available for all of these programs, either by running man from a shell or online at http://now.cs.berkeley.edu/man/html1/glunix.html. A brief description of each utility program follows:

    glush: The GLUnix shell is a modified version of tcsh. Most jobs submitted to the shell are load balanced among GLUnix machines. However, some jobs must be run locally, since GLUnix does not provide completely transparent TTY support and since I/O bandwidth to stdin, stdout, and stderr is limited by TCP bandwidth. The shell automatically runs a number of these jobs locally; users may customize this list by adding programs to the glunix_runlocal shell variable, which indicates to glush which programs should be run locally.

    glumake: A modified version of GNU's make program. A -j argument specifies the degree of parallelism for the make. The degree of parallelism defaults to the number of nodes available in the cluster.

    glurun: Runs the specified program on the GLUnix cluster. For example, glurun bigsim will run bigsim on the least loaded machine in the GLUnix cluster. You can run a parallel program on the NOW by specifying the parameter -N, where N is a number representing the degree of parallelism you wish. Thus glurun -5 bigsim will run bigsim on the 5 least-loaded nodes.

    glustat: Prints the status of all machines in the GLUnix cluster.

    glups: Similar to Unix ps, but only prints information about GLUnix processes.

    glukill: Sends an arbitrary signal (defaults to SIGTERM) to a specified GLUnix process.

    gluptime: Similar to Unix uptime, reporting on how long the system has been up and the current system load.

    GLUnix Implementation Status

    The following functionality is implemented in NOW-1:

    Remote Execution: Jobs can be started on any node in the GLUnix cluster. A single job may spawn multiple worker processes on different nodes in the system.

    Load Balancing: GLUnix maintains imprecise information on the load of each machine in the cluster. The system farms out jobs to the node which it considers least loaded at request time.

    Signal Propagation: A signal sent to a process is multiplexed to all worker processes comprising the GLUnix process.

    Coscheduling: Jobs spawned to multiple nodes can be gang scheduled to achieve better performance. The current coscheduling time quantum is 1 second.

    I/O Redirection: Output to stdout or stderr is piped back to the startup node. Characters sent to stdin are multiplexed to all worker processes. Output redirection is limited by network bandwidth.


    Practical 3

    Aim: A Sample GLUnix Program

    Theory:

    Each program running under GLUnix has a startup process, which runs in your shell, and a number of child processes, which run on remote nodes. There must be at least one child process, and there may be up to one for each node currently running GLUnix. The startup process is responsible for routing signal information (for example, if you type ^Z or ^C) and input/output to the child processes. The child processes then make up the program itself. If there is more than one child, this is a parallel program, else it is a sequential program.

    Here is the code and Makefile for a sample program which runs under GLUnix (use gmake with this Makefile). This routine provides the code for both the startup and child processes. The distinction between the two kinds of processes is made using the Glib_AmIStartup() library call.

    Program:

    /* standard C headers for printf, fprintf, atoi, exit and environ */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #include "glib/types.h"
    #include "glib.h"

    int
    main(int argc, char **argv)
    {
        int numNodes;
        VNN vnn;

        if (!Glib_Initialize()) {
            fprintf(stderr, "Glib_Initialize failed\n");
            exit(-1);
        }

        /* number of child processes to spawn; defaults to 2 */
        if (argc > 1) {
            numNodes = atoi(argv[1]);
        } else {
            numNodes = 2;
        }

        if (Glib_AmIStartup()) {
            /* Startup process runs here */
            printf("Startup is spawning %d children\n", numNodes);
            Glib_Spawnef(numNodes, GLIB_SPAWN_OUTPUT_VNN,
                         argv[0], argv, environ);

    The Makefile for this program (if you call it test.c) is:

    CC = gcc
    CFLAGS = -Wall -g
    TARGET = test
    SRCS = test.c
    LIBS = -lglunix -lam2 -lsocket -lnsl
    MANS = test.1
    MANHOME = ../../man/man1
    BINHOME = ../../bin/sun4-solaris2.4-gamtcp
    LIBPATH = /usr/now/lib
    INCLUDEPATH = /usr/now/include/

    ###############################################################

    LLIBPATH = $(addprefix -L,$(LIBPATH))
    RLIBPATH = $(addprefix -R,$(LIBPATH))
    INCPATH = $(addprefix -I,$(INCLUDEPATH))

    all: $(TARGET)

    $(TARGET): $(SRCS)
	gcc $(CFLAGS) -o $(TARGET) $(SRCS) $(RLIBPATH) \
	$(LLIBPATH) $(INCPATH) $(LIBS)

    clean:
	rm -f $(TARGET) core *~ *.o

    install: $(TARGET) installman
	cp $(TARGET) $(BINHOME)

    installman:
	cp $(MANS) $(MANHOME)

    Output from this program should look something like this (though the order of the output lines may vary):

    % ./test
    Startup is spawning 2 children
    1:***** I am a child process
    1:***** VNN: 1
    1:***** Degree of program parallelism: 2
    1:***** Total Nodes in system: 14
    1:***** Doing Barrier
    0:***** I am a child process
    0:***** VNN: 0
    0:***** Degree of program parallelism: 2
    0:***** Total Nodes in system: 14
    0:***** Child 0 is sleeping
    0:***** Doing Barrier
    1:***** Done with Barrier
    0:***** Done with Barrier
    %


    Practical 4

    Aim: To study and practice the Alchemi grid framework.

    Theory:

    1. Introduction and Concepts

    This section gives you an introduction to how Alchemi implements the concept of grid computing and discusses concepts required for using Alchemi. Some key features of the framework are highlighted along the way.

    1.1. The Network is the Computer

    The idea of meta-computing, the use of a network of many independent computers as if they were one large parallel machine, or virtual supercomputer, is very compelling since it enables supercomputer-scale processing power to be had at a fraction of the cost of traditional supercomputers.

    While traditional virtual machines (e.g. clusters) have been designed for a small number of tightly coupled, homogeneous resources, the exponential growth in Internet connectivity allows this concept to be applied on a much larger scale. This, coupled with the fact that desktop PCs in corporate and home environments are heavily underutilized (typically only one-tenth of the processing power is used), has given rise to interest in harnessing the vast amounts of processing power that are available in the form of spare CPU cycles on Internet- or intranet-connected desktops. This new paradigm has been dubbed Grid Computing.

    1.2. How Alchemi Works

    There are four types of distributed components (nodes) involved in the construction of Alchemi grids and the execution of grid applications: Manager, Executor, User and Cross-Platform Manager.


    A grid is created by installing Executors on each machine that is to be part of the grid and linking them to a central Manager component. The Windows installer setup that comes with the Alchemi distribution and minimal configuration makes it very easy to set up a grid.

    An Executor can be configured to be dedicated (meaning the Manager initiates thread execution directly) or non-dedicated (meaning that thread execution is initiated by the Executor). Non-dedicated Executors can work through firewalls and NAT servers since there is only one-way communication between the Executor and Manager. Dedicated Executors are more suited to an intranet environment and non-dedicated Executors are more suited to the Internet environment.

    Users can develop, execute and monitor grid applications using the .NET API and tools which are part of the Alchemi SDK. Alchemi offers a powerful grid thread programming model which makes it very easy to develop grid applications, and a grid job model for grid-enabling legacy or non-.NET applications.

    An optional component (not shown) is the Cross Platform Manager web service which offers interoperability with custom non-.NET grid middleware.

    2. Installation, Configuration and Operation

    This section documents the installation, configuration and operation of the various parts of the framework for setting up Alchemi grids. The various components can be downloaded from:

    2.1. Common Requirements


    Microsoft .NET Framework 1.1

    2.2. Manager

    The Manager should be installed on a stable and reasonably capable machine. The Manager requires:

    SQL Server 2000 or MSDE 2000

    If using SQL Server, ensure that SQL Server authentication is enabled. Otherwise, follow these instructions to install and prepare MSDE 2000 for Alchemi. Make a note of the system administrator (sa) password in either case. [Note: SQL Server / MSDE do not necessarily need to be installed on the same machine as the Manager.]

    The Alchemi Manager can be installed in two modes:

    As a normal Windows desktop application

    As a Windows service (supported only on Windows NT/2000/XP/2003)

    To install the Manager as a Windows application, use the Manager Setup installer. For service-mode installation, use the Manager Service Setup. The configuration steps are the same for both modes. In the case of service-mode installation, the Alchemi Manager Service is installed and configured to run automatically on Windows start-up. After installation, the standard Windows service control manager can be used to control the service. Alternatively, the Alchemi ManagerServiceController program can be used. The Manager service controller is a graphical interface which is exactly similar to the normal Manager application.

    Install the Manager via the Manager installer. Use the sa password noted previously to install the database during the installation.

    Configuration & Operation


    The Manager can be run from the desktop or from Start -> Programs -> Alchemi -> Manager -> Alchemi Manager.

    The database configuration settings used during installation automatically appear when the Manager is first started.

    Click the "Start" button to start the Manager.

    When closed, the Manager is minimised to the system tray.

    Under service-mode operation, the GUI shown in fig. 3 is used to start / stop the Manager service. The service will continue to operate even after the service controller application exits.

    Manager Logging

    The Manager logs its output and errors to a log file called alchemi-manager.log. This can be used to debug the Manager, report errors, or verify Manager operation. The log file is placed in the dat directory under the installation directory.


    2.3. Role-Based Security

    Every program connecting to the Manager must supply a valid username and password. Three default accounts are created during installation: executor (password: executor), user (password: user) and admin (password: admin), belonging to the 'Executors', 'Users' and 'Administrators' groups respectively.

    Users are administered via the 'Users' tab of the Alchemi Console (located in the Alchemi SDK). Only Administrators have permission to manage users; you must therefore initially log in with the default admin account.

    The Console lets you add users, modify their group membership and change passwords.

    The Users group (grp_id = 3) is meant for users executing grid applications.

    The Executors group (grp_id = 2) is meant for Alchemi Executors. By default, Executors attempting to connect to the Manager will use the executor account. If you do not wish Executors to connect anonymously, you can change the password for this account.

    You should change the default admin password for production use.
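    As a quick illustration of how these accounts are used from code, the minimal sketch below reuses the GConnection.FromConsole call that appears in the sample programs in Practicals 5 and 6. The host, port and credentials shown are just the defaults used by those samples, and the class name ConnectWithDefaultAccount is a hypothetical name for this illustration only.

    using System;
    using Alchemi.Core;
    using Alchemi.Core.Owner;

    class ConnectWithDefaultAccount
    {
        static void Main()
        {
            // Prompts on the console, offering these values as defaults: the 'user'
            // account (Users group, grp_id = 3) created during Manager installation.
            GConnection gc = GConnection.FromConsole("localhost", "9000", "user", "user");

            // The resulting connection is handed to a GApplication (see Practical 5);
            // an Executor authenticates with the 'executor' account instead, and the
            // Console tool with 'admin'.
            GApplication app = new GApplication(gc);
            app.ApplicationName = "Connection test";

            Console.WriteLine("Connection details accepted; application '{0}' is ready to be started.",
                app.ApplicationName);
        }
    }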

    2.4. Cross Platform Manager

    The Cross Platform Manager (XPManager) requires:

    Internet Information Services (IIS)

    ASP.NET

    Installation

    Install the XPManager web service via the Cross Platform Manager installer.


    Configuration

    If the XPManager is installed on a different machine than the Manager, or if the default port of the Manager is changed, the web service's configuration must be modified. The XPManager is configured via the ASP.NET Web.config file located in the installation directory (wwwroot\Alchemi\CrossPlatformManager by default):

    Operation

    The XPManager web service URL is of the format:

    http://[host_name]/[installation_path]

    The default is therefore:

    http://[host_name]/Alchemi/CrossPlatformManager

    The web service interfaces with the Manager. The Manager must therefore be running and started for the web service to work.
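    As a simple way to verify that something is answering at that URL, the sketch below issues a plain HTTP GET using the standard .NET HttpWebRequest class. This check is not part of Alchemi, and the localhost host name is an assumption you would replace with your own XPManager host.

    using System;
    using System.Net;

    class XPManagerCheck
    {
        static void Main()
        {
            // Default Cross Platform Manager location; substitute your own host name.
            string url = "http://localhost/Alchemi/CrossPlatformManager/";

            try
            {
                HttpWebRequest request = (HttpWebRequest) WebRequest.Create(url);
                using (HttpWebResponse response = (HttpWebResponse) request.GetResponse())
                {
                    Console.WriteLine("XPManager reachable (HTTP {0}).", (int) response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                // Remember that IIS and the Manager must both be running for the
                // web service to respond.
                Console.WriteLine("XPManager not reachable: " + ex.Message);
            }
        }
    }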

    2.5. Executor

    Installation

    The Alchemi Executor can be installed in two modes:

    As a normal Windows desktop application

    As a Windows service (supported only on Windows NT/2000/XP/2003)

    To install the Executor as a Windows application, use the Executor Setup installer. For service-mode installation, use the Executor Service Setup. The configuration steps are the same for both modes. In the case of service-mode installation, the Alchemi Executor Service is installed and configured to run automatically on Windows start-up. After installation, the standard Windows service control manager can be used to control the service. Alternatively, the Alchemi ExecutorServiceController program can be used. The Executor service controller is a graphical interface which looks very similar to the normal Executor application.

    Install the Executor via the Executor installer and follow the on-screen instructions.

    Configuration & Operation

    The Executor can be run from the desktop or Start -> Programs -> Alchemi -> Executor -> Alchemi Executor.

    The Executor is configured from the application itself.

    You need to configure 2 aspects of the Executor:

    The host and port of the Manager to connect to.


    Dedicated / non-dedicated execution. A non-dedicated Executor executes grid threads on a voluntary basis (it requests threads to execute from the Manager), while a dedicated Executor is always executing grid threads (it is directly provided grid threads to execute by the Manager). A non-dedicated Executor works behind firewalls.

    Click the "Connect" button to connect the Executor to the Manager.

    If the Executor is configured for non-dedicated execution, you can start executing by clicking the "Start Executing" button in the "Manage Execution" tab.


    The Executor only utilises idle CPU cycles on the machine and does not impact the CPU usage of running programs. When closed, the Executor sits in the system tray. Other options, such as the interval of the executor heartbeat (i.e. the time between pinging the Manager), can be configured via the options tab.

    Under service-mode operation, the GUI shown in fig. 8 is used to start / stop the Executor service. The service will continue to operate even after the service controller application exits.

    Executor Logging

    The Executor logs its output and errors to a log file called alchemi-executor.log. This can be used to debug the Executor, report errors, or verify Executor operation. The log file is placed in the dat directory under the installation directory.

    2.6. Software Development Kit


    The SDK can be unzipped to a convenient location. It contains the following:

    Alchemi Console

    The Console (Alchemi.Console.exe) is a grid administration and monitoring tool. It is located in the bin directory.

    The 'Summary' tab shows system statistics and a real-time graph of power availability and usage. The 'Applications' tab lets you monitor running applications. The 'Executors' tab provides information on Executors. The 'Users' tab lets you manage users.

    Alchemi.Core.dll


    Alchemi.Core.dll is a class library for creating grid applications to run on Alchemi grids. It is located in the bin directory. It must be referenced by all your grid applications. (For more on developing grid applications, please see section 3, Grid Programming.)

    3. Grid Programming

    This section is a guide to developing Alchemi grid applications.

    3.1. Introduction to Grid Software

    For the purpose of grid application development, a grid can be viewed as an aggregation of multiple machines (each with one or more CPUs) abstracted to behave as one "virtual" machine with multiple CPUs. However, grid implementations differ in the way they implement this abstraction, and one of the key differentiating features of Alchemi is the way it abstracts the grid, with the aim of making the process of developing grid software as easy as possible.

    Due to the nature of the grid environment (loosely coupled, heterogeneous resources connected over an unreliable, high-latency network), grid applications have the following features:

    They can be parallelised into a number of independent computation units.

    Work units have a high computation time to communication time ratio.
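    To make the idea of independent computation units concrete, the sketch below splits a fixed amount of work into roughly equal units using the same ceiling-division arithmetic that the Pi Calculator sample in Practical 5 uses to divide digits among its grid threads; totalDigits and digitsPerUnit are illustrative values only.

    using System;

    class WorkPartitioning
    {
        static void Main()
        {
            int totalDigits = 95;      // total work, e.g. digits of pi to compute
            int digitsPerUnit = 10;    // work handed to each independent unit

            // Ceiling division: the last unit picks up whatever is left over.
            int units = (int) Math.Floor((double) totalDigits / digitsPerUnit);
            if (digitsPerUnit * units < totalDigits)
            {
                units++;
            }

            for (int i = 0; i < units; i++)
            {
                int start = 1 + (i * digitsPerUnit);
                int count = Math.Min(digitsPerUnit, totalDigits - i * digitsPerUnit);
                Console.WriteLine("unit {0}: digits {1} to {2}", i, start, start + count - 1);
            }
        }
    }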

    Alchemi supports two models for parallel application composition.

    Coarse-Grained Abstraction: File-Based Jobs

    Traditional grid implementations have only offered a high-level abstraction of the virtual machine, where the smallest unit of parallel execution is a process. The specification of a job to be executed on the grid at the most basic level consists of input files, output files and an executable (process). In this scenario, writing software to run on a grid involves dealing with files, an approach that can be complicated and inflexible.

    Fine-Grained Abstraction: Grid Threads

    On the other hand, the primary programming model supported by Alchemi offers a more low-level (and hence more powerful) abstraction of the underlying grid by providing a programming model that is object-oriented and that imitates traditional multi-threaded programming.

    The smallest unit of parallel execution in this case is a grid thread (a .NET object), where a grid thread is programmatically analogous to a "normal" thread (without inter-thread communication).

    The grid application developer deals only with grid thread and grid application .NET objects, allowing him/her to concentrate on the application itself without worrying about the "plumbing" details. Furthermore, abstraction at this level allows the use of an elegant programming model with clean interfacing between remote and local code.

    Note: Hereafter, applications and threads can be taken to mean grid applications and grid threads respectively, unless stated otherwise.
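    Before the full samples in Practicals 5 and 6, the sketch below strings together the pieces of the grid thread model that those samples use (GThread, GApplication, GConnection and ModuleDependency). The MultiplierThread class and its fields are hypothetical names invented for this illustration, and the connection defaults are simply those used elsewhere in this manual.

    using System;
    using Alchemi.Core;
    using Alchemi.Core.Owner;

    namespace Tutorial
    {
        // A grid thread is a serializable .NET object whose Start() method is
        // executed remotely on whichever Executor the Manager assigns it to.
        [Serializable]
        class MultiplierThread : GThread
        {
            public readonly int Factor;
            public int Result;

            public MultiplierThread(int factor)
            {
                Factor = factor;
            }

            public override void Start()
            {
                Result = Factor * Factor;   // the "work" performed on the remote node
            }
        }

        class MinimalGridApp
        {
            static GApplication app;

            static void Main()
            {
                // Connect to the Manager (defaults as used by the samples in this manual).
                GConnection gc = GConnection.FromConsole("localhost", "9000", "user", "user");

                app = new GApplication(gc);
                app.ApplicationName = "Minimal grid thread sketch";

                // Ship the module containing MultiplierThread to the Executors.
                app.Manifest.Add(new ModuleDependency(typeof(MultiplierThread).Module));

                // Each thread is an independent unit of work.
                for (int i = 1; i <= 5; i++)
                {
                    app.Threads.Add(new MultiplierThread(i));
                }

                app.ThreadFinish += new GThreadFinish(ThreadFinished);
                app.Start();

                Console.ReadLine();
            }

            // Called as each grid thread completes; its fields have been serialized back.
            static void ThreadFinished(GThread t)
            {
                MultiplierThread mt = (MultiplierThread) t;
                Console.WriteLine("{0} squared is {1}", mt.Factor, mt.Result);
            }
        }
    }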


    Grid Jobs vs. Grid Threads

    Support for execution of grid jobs (programmatically as well as declaratively) is present for the following reasons:

    Grid-enabling legacy or non-.NET applications

    Interoperability with grid middleware on other platforms (via a web services interface)

    The grid thread model is preferred due to its ease of use, power and flexibility, and should be used for new applications, while the grid job model should be used for grid-enabling legacy/non-.NET applications or by non-.NET middleware interoperating with Alchemi.


    Practical 5

    Aim: To develop a Pi calculator in Alchemi.

    Program:

    Manager:

    Plouffe_Bellard.cs

    using System;

    namespace Alchemi.Examples.PiCalculator
    {
        public class Plouffe_Bellard
        {
            public Plouffe_Bellard() {}

            private static int mul_mod(int a, int b, int m)
            {
                return (int) (((long) a * (long) b) % m);
            }

            /* return the inverse of x mod y */
            private static int inv_mod(int x, int y)
            {
                int q, u, v, a, c, t;

                u = x; v = y; c = 1; a = 0;

                do
                {
                    q = v / u;

                    t = c; c = a - q * c; a = t;

                    t = u; u = v - q * u; v = t;
                } while (u != 0);

                a = a % y;

                if (a < 0)
                {
                    a = y + a;
                }

                return a;
            }

            /* return (a^b) mod m */
            private static int pow_mod(int a, int b, int m)
            {
                int r, aa;

                r = 1; aa = a;

                while (true)
                {
                    if ((b & 1) != 0)
                    {
                        r = mul_mod(r, aa, m);
                    }

                    b = b >> 1;

                    if (b == 0)
                    {
                        break;
                    }

                    aa = mul_mod(aa, aa, m);
                }

                return r;
            }

            /* return true if n is prime */
            private static bool is_prime(int n)
            {
                if ((n % 2) == 0)
                {
                    return false;
                }

                int r = (int) Math.Sqrt(n);

                for (int i = 3; i <= r; i += 2)
                {
                    if ((n % i) == 0)
                    {
                        return false;
                    }
                }

                return true;
            }

            /* return the prime number immediately after n */
            private static int next_prime(int n)
            {
                do
                {
                    n++;
                } while (!is_prime(n));

                return n;
            }

            public String CalculatePiDigits(int n)
            {
                int av, vmax, num, den, k, kq, kq2, s, t, v;

                int N = (int) ((n + 20) * Math.Log(10) / Math.Log(2));

                double sum = 0;

                for (int a = 3; a <= (2 * N); a = next_prime(a))
                {
                    vmax = (int) (Math.Log(2 * N) / Math.Log(a));

                    av = 1;
                    for (int i = 0; i < vmax; i++)
                    {
                        av = av * a;
                    }

                    s = 0; num = 1; den = 1; v = 0; kq = 1; kq2 = 1;

                    for (k = 1; k <= N; k++)
                    {
                        t = k;
                        if (kq >= a)
                        {
                            do
                            {
                                t = t / a;
                                v--;
                            } while ((t % a) == 0);

                            kq = 0;
                        }
                        kq++;
                        num = mul_mod(num, t, av);

                        t = (2 * k - 1);
                        if (kq2 >= a)
                        {
                            if (kq2 == a)
                            {
                                do
                                {
                                    t = t / a;
                                    v++;
                                } while ((t % a) == 0);
                            }
                            kq2 -= a;
                        }

                        den = mul_mod(den, t, av);
                        kq2 += 2;

                        if (v > 0)
                        {
                            t = inv_mod(den, av);
                            t = mul_mod(t, num, av);
                            t = mul_mod(t, k, av);

                            for (int i = v; i < vmax; i++)
                            {
                                t = mul_mod(t, a, av);
                            }

                            s += t;

                            if (s >= av)
                            {
                                s -= av;
                            }
                        }
                    }

                    t = pow_mod(10, n - 1, av);
                    s = mul_mod(s, t, av);
                    sum = (sum + (double) s / (double) av) % 1.0;
                }

                int Result = (int) (sum * 1e9);

                String StringResult = String.Format("{0:D9}", Result);

                return StringResult;
            }

            public int DigitsReturned()
            {
                return 9;
            }
        }
    }

    PiCalcGridThread.cs

    using System;
    using System.Threading;
    using System.Reflection;
    using System.Text;
    using Alchemi.Core;
    using Alchemi.Core.Owner;

    namespace Alchemi.Examples.PiCalculator
    {
        [Serializable]
        public class PiCalcGridThread : GThread
        {
            private int _StartDigitNum;
            private int _NumDigits;
            private string _Result;

            public int StartDigitNum
            {
                get { return _StartDigitNum; }
            }

            public int NumDigits
            {
                get { return _NumDigits; }
            }

            public string Result
            {
                get { return _Result; }
            }

            public PiCalcGridThread(int startDigitNum, int numDigits)
            {
                _StartDigitNum = startDigitNum;
                _NumDigits = numDigits;
            }

            public override void Start()
            {
                StringBuilder temp = new StringBuilder();

                Plouffe_Bellard pb = new Plouffe_Bellard();

                // CalculatePiDigits returns 9 digits at a time, so step by DigitsReturned()
                // until _NumDigits digits starting at _StartDigitNum have been collected.
                for (int i = 0; i < _NumDigits; i += pb.DigitsReturned())
                {
                    temp.Append(pb.CalculatePiDigits(_StartDigitNum + i));
                }

                _Result = temp.ToString().Substring(0, _NumDigits);
            }
        }
    }


    using System;
    using System.Reflection;
    using System.Text;
    using Alchemi.Core;
    using Alchemi.Core.Owner;
    using Alchemi.Core.Utility;
    using log4net;

    // Configure log4net using the .config file
    [assembly: log4net.Config.XmlConfigurator(Watch = true)]

    namespace Alchemi.Examples.PiCalculator
    {
        class PiCalculatorMain
        {
            // Create a logger for use in this class
            private static readonly ILog logger =
                LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

            static int NumThreads = 10;
            static int DigitsPerThread = 10;
            static int NumberOfDigits = NumThreads * DigitsPerThread;

            static DateTime StartTime;
            static GApplication App;

            static int th = 0;

            [STAThread]
            static void Main()
            {
                Console.WriteLine("[Pi Calculator Grid Application]\n--------------------------------\n");

                Console.WriteLine("Press <enter> to start ...");
                Console.ReadLine();

                Logger.LogHandler += new LogEventHandler(LogHandler);

                try
                {
                    // get the number of digits from the user
                    bool numberOfDigitsEntered = false;

                    while (!numberOfDigitsEntered)
                    {
                        try
                        {
                            NumberOfDigits = Int32.Parse(Utils.ValueFromConsole("Digits to calculate", "100"));

                            if (NumberOfDigits > 0)
                            {
                                numberOfDigitsEntered = true;
                            }
                        }
                        catch (Exception)
                        {
                            Console.WriteLine("Invalid numeric value.");
                            numberOfDigitsEntered = false;
                        }
                    }

                    // get settings from user
                    GConnection gc = GConnection.FromConsole("localhost", "9000", "user", "user");

                    StartTiming();

                    // create a new grid application
                    App = new GApplication(gc);
                    App.ApplicationName = "PI Calculator - Alchemi sample";

                    // add the module containing PiCalcGridThread to the application manifest
                    App.Manifest.Add(new ModuleDependency(typeof(PiCalculator.PiCalcGridThread).Module));

                    NumThreads = (Int32) Math.Floor((double) NumberOfDigits / DigitsPerThread);

                    if (DigitsPerThread * NumThreads < NumberOfDigits)
                    {
                        NumThreads++;
                    }

                    // create and add the required number of grid threads
                    for (int i = 0; i < NumThreads; i++)
                    {
                        int StartDigitNum = 1 + (i * DigitsPerThread);

                        /// the number of digits for each thread:
                        /// each thread will get DigitsPerThread digits except the last one,
                        /// which might get less
                        int DigitsForThisThread = Math.Min(DigitsPerThread,
                            NumberOfDigits - i * DigitsPerThread);

                        Console.WriteLine("starting a thread to calculate the digits of pi from {0} to {1}",
                            StartDigitNum,
                            StartDigitNum + DigitsForThisThread - 1);

                        PiCalcGridThread thread = new PiCalcGridThread(StartDigitNum, DigitsForThisThread);

                        App.Threads.Add(thread);
                    }

                    // subscribe to events
                    App.ThreadFinish += new GThreadFinish(ThreadFinished);
                    App.ApplicationFinish += new GApplicationFinish(ApplicationFinished);

                    // start the grid application
                    App.Start();

                    logger.Debug("PiCalc started.");
                }
                catch (Exception e)
                {
                    Console.WriteLine("ERROR: {0}", e.StackTrace);
                }

                Console.ReadLine();
            }

            private static void LogHandler(object sender, LogEventArgs e)
            {
                switch (e.Level)
                {
                    case LogLevel.Debug:
                        string message = e.Source + ":" + e.Member + " - " + e.Message;
                        logger.Debug(message, e.Exception);
                        break;
                    case LogLevel.Info:
                        logger.Info(e.Message);
                        break;
                    case LogLevel.Error:
                        logger.Error(e.Message, e.Exception);
                        break;
                    case LogLevel.Warn:
                        logger.Warn(e.Message);
                        break;
                }
            }

            static void StartTiming()
            {
                StartTime = DateTime.Now;
            }

            static void ThreadFinished(GThread thread)
            {
                th++;
                Console.WriteLine("grid thread # {0} finished executing", thread.Id);

                // if (th > 1)
                // {
                //     Console.WriteLine("For testing aborting threads beyond th=5");
                //     try
                //     {
                //         Console.WriteLine("Aborting thread th=" + th);
                //         thread.Abort();
                //         Console.WriteLine("DONE Aborting thread th=" + th);
                //     }
                //     catch (Exception e)
                //     {
                //         Console.WriteLine(e.ToString());
                //     }
                // }
            }

            static void ApplicationFinished()
            {
                StringBuilder result = new StringBuilder();
                for (int i = 0; i


    Practical 6

    Aim: To develop an application to generate prime numbers in Alchemi.

    Program:

    PrimeNumberGenerator.cs

    using System;
    using System.Reflection;
    using Alchemi.Core;
    using Alchemi.Core.Owner;
    using log4net;

    namespace Tutorial
    {
        [Serializable]
        class PrimeNumberChecker : GThread
        {
            public readonly int Candidate;
            public int Factors = 0;

            public PrimeNumberChecker(int candidate)
            {
                Candidate = candidate;
            }

            public override void Start()
            {
                // count the number of factors of the number, from 1 to the number itself
                // (a prime candidate ends up with exactly two factors: 1 and itself)
                for (int d = 1; d <= Candidate; d++)
                {
                    if (Candidate % d == 0)
                    {
                        Factors++;
                    }
                }
            }
        }

        // ... (the listing resumes here inside the main program's LogHandler method) ...

                        break;
                    case LogLevel.Info:
                        logger.Info(e.Message);
                        break;
                    case LogLevel.Error:
                        logger.Error(e.Message, e.Exception);
                        break;
                    case LogLevel.Warn:
                        logger.Warn(e.Message);
                        break;
                }
            }

            [STAThread]
            static void Main(string[] args)
            {
                Logger.LogHandler += new LogEventHandler(LogHandler);

                Console.WriteLine("[PrimeNumber Checker Grid Application]\n--------------------------------\n");

                Console.Write("Enter a maximum limit for Prime Number checking [default=1000000]:");

                string input = Console.ReadLine();

                if (input != null || input.Equals(""))
                {
                    try
                    {
                        max = Int32.Parse(input);
                    }
                    catch {}
                }

                App.ApplicationName = "Prime Number Generator - Alchemi sample";

                Console.WriteLine("Connecting to Alchemi Grid...");
                // initialise application
                Init();

                // create grid threads to check if some randomly generated large numbers are prime
                Random rnd = new Random();
                for (int i = 0; i

                // stop the application
                try
                {
                    App.Stop();
                }
                catch {}
            }

            private static void Init()
            {
                try
                {
                    // get settings from user
                    GConnection gc = GConnection.FromConsole("localhost", "9000", "user", "user");
                    StartTime = DateTime.Now;
                    App.Connection = gc;

                    // grid thread needs to
                    App.Manifest.Add(new ModuleDependency(typeof(PrimeNumberChecker).Module));

                    // subscribe to ThreadFinish event
                    App.ThreadFinish += new GThreadFinish(App_ThreadFinish);
                    App.ApplicationFinish += new GApplicationFinish(App_ApplicationFinish);
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Error: " + ex.Message);
                    logger.Error("ERROR: ", ex);
                }
            }

            private static void App_ThreadFinish(GThread thread)
            {
                // cast the supplied GThread back to PrimeNumberChecker
                PrimeNumberChecker pnc = (PrimeNumberChecker) thread;

                // check whether the candidate is prime or not
                bool prime = false;
                if (pnc.Factors == 2) prime = true;

                // display results
                Console.WriteLine("{0} is prime? {1} ({2} factors)", pnc.Candidate, prime, pnc.Factors);

                if (prime)
                    primesFound++;
            }

            private static void App_ApplicationFinish()
            {
                Console.WriteLine("Application finished. \nRandom primes found: {0}. Total time taken: {1}",
                    primesFound, DateTime.Now - StartTime);
            }
        }


    }

    AssemblyInfo.cs

    using System.Reflection;
    using System.Runtime.CompilerServices;

    //
    // General Information about an assembly is controlled through the following
    // set of attributes. Change these attribute values to modify the information
    // associated with an assembly.
    //
    [assembly: AssemblyTitle("")]
    [assembly: AssemblyDescription("")]
    [assembly: AssemblyConfiguration("")]
    [assembly: AssemblyCompany("")]
    [assembly: AssemblyProduct("")]
    [assembly: AssemblyCopyright("")]
    [assembly: AssemblyTrademark("")]
    [assembly: AssemblyCulture("")]

    //
    // Version information for an assembly consists of the following four values:
    //
    //      Major Version
    //      Minor Version
    //      Build Number
    //      Revision
    //
    // You can specify all the values or you can default the Revision and Build Numbers
    // by using the '*' as shown below:

    [assembly: AssemblyVersion("1.0.*")]

    //
    // In order to sign your assembly you must specify a key to use. Refer to the
    // Microsoft .NET Framework documentation for more information on assembly signing.
    //
    // Use the attributes below to control which key is used for signing.
    //
    // Notes:
    //   (*) If no key is specified, the assembly is not signed.
    //   (*) KeyName refers to a key that has been installed in the Crypto Service
    //       Provider (CSP) on your machine. KeyFile refers to a file which contains
    //       a key.
    //   (*) If the KeyFile and the KeyName values are both specified, the
    //       following processing occurs:
    //       (1) If the KeyName can be found in the CSP, that key is used.
    //       (2) If the KeyName does not exist and the KeyFile does exist, the key
    //           in the KeyFile is installed into the CSP and used.
    //   (*) In order to create a KeyFile, you can use the sn.exe (Strong Name) utility.
    //       When specifying the KeyFile, the location of the KeyFile should be
    //       relative to the project output directory, which is
    //       %Project Directory%\obj\<configuration>. For example, if your KeyFile is
    //       located in the project directory, you would specify the AssemblyKeyFile
    //       attribute as [assembly: AssemblyKeyFile("..\\..\\mykey.snk")]
    //   (*) Delay Signing is an advanced option - see the Microsoft .NET Framework
    //       documentation for more information on this.
    //
    [assembly: AssemblyDelaySign(false)]
    [assembly: AssemblyKeyFile("")]
    [assembly: AssemblyKeyName("")]


    Practical 7

    Aim: To study the GridSim simulator.

    Theory:

    GridSim: a toolkit for the modeling and simulation of distributed resource management and scheduling for Grid computing

    INTRODUCTION

    The proliferation of the Internet and the availability of powerful computers and high-speed networks as low-cost commodity components are changing the way we do large-scale parallel and distributed computing. The interest in coupling geographically distributed (computational) resources is also growing for solving large-scale problems, leading to what is popularly called the Grid and peer-to-peer (P2P) computing networks. These enable sharing, selection and aggregation of suitable computational and data resources for solving large-scale data-intensive problems in science, engineering, and commerce. A generic view of a Grid computing environment is shown in the figure. The Grid consists of four key layers of components: fabric, core middleware, user-level middleware, and applications [3]. The Grid fabric includes computers (low-end and high-end computers including clusters), networks, scientific instruments, and their resource management systems. The core Grid middleware provides services that are essential for securely accessing remote resources uniformly and transparently. The services they provide include security and access management, remote job submission, storage, and resource information. The user-level middleware provides higher-level tools such as resource brokers, application development and adaptive runtime environments. The Grid applications include those constructed using Grid libraries or legacy applications that can be Grid enabled using user-level middleware tools.

    The user essentially interacts with a resource broker that hides the complexities of Grid computing. The broker discovers resources that the user can access using information services, negotiates for access costs using trading services, maps tasks to resources (scheduling), stages the application and data for processing (deployment), starts job execution, and finally gathers the results. It is also responsible for monitoring and tracking application execution progress along with adapting to the changes in Grid runtime environment conditions and resource failures.

    The computing environments comprise heterogeneous resources (PCs, workstations, clusters, and supercomputers), fabric management systems (single system image OS, queuing systems, etc.) and policies, and applications (scientific, engineering, and commercial) with varied requirements (CPU, input/output (I/O), memory and/or network intensive). The users, producers (also called resource owners) and consumers (also called end-users), have different goals, objectives, strategies, and demand patterns. More importantly, both resources and end-users are geographically distributed, with different time zones. In managing such complex Grid environments, traditional approaches to resource management that attempt to optimize system-wide measures of performance cannot be employed. This is because traditional approaches use centralized policies that need complete state information and a common fabric management policy, or a decentralized consensus-based policy. In large-scale Grid environments, it is impossible to define an acceptable system-wide performance matrix and a common fabric management policy. Apart from the centralized approach, two other approaches that are used in distributed resource management are hierarchical and decentralized scheduling, or a combination of them. We note that similar heterogeneity and decentralization complexities exist in human economies, where market-driven economic models have been used to successfully manage them.


    We investigated the use of economics as a metaphor for management of resources in Grid computing environments. A Grid resource broker, called Nimrod-G [5], has been developed that performs scheduling of parameter sweep, task-farming applications on geographically distributed resources. It supports deadline and budget-based scheduling driven by market-based economic models. To meet users' quality of service requirements, our broker dynamically leases Grid resources and services at runtime depending on their capability, cost, and availability. Many scheduling experiments have been conducted on the execution of data-intensive science applications such as molecular modeling for drug design under a few Grid scenarios (like a 2 h deadline and 10 machines for a single user). The ability to experiment with a large number of Grid scenarios was limited by the number of resources that were available in the WWG (World-Wide Grid) testbed [9]. Also, it was impossible to create a repeatable and controlled environment for experimentation and evaluation of scheduling strategies. This is because resources in the Grid span multiple administrative domains, each with their own policies, users and priorities.

    Researchers and students investigating resource management and scheduling for large-scale distributed computing need a simple framework for deterministic modeling and simulation of resources and applications to evaluate scheduling strategies. For most who do not have access to ready-to-use testbed infrastructures, building them is expensive and time consuming. Also, even for those who have access, the testbed size is limited to a few resources and domains, and testing scheduling algorithms for scalability and adaptability, and evaluating scheduler performance for various applications and resource scenarios, is harder and impossible to trace. To overcome these limitations, we provide a Java-based Grid simulation toolkit called GridSim. Grid computing researchers and educators have also recognized the importance of and the need for such a toolkit for modeling and simulation environments [10]. It should be noted that this paper has a major orientation towards the Grid; however, we believe that our discussion and thoughts apply equally well to P2P systems, since resource management and scheduling issues in both systems are quite similar.

    The GridSim toolkit supports modeling and simulation of a wide range of heterogeneous resources, such as single- or multiprocessor, shared- and distributed-memory machines such as PCs, workstations, SMPs, and clusters with different capabilities and configurations. It can be used for modeling and simulation of application scheduling on various classes of parallel and distributed computing systems such as clusters [11], Grids [1], and P2P networks [2]. The resources in clusters are located in a single administrative domain and managed by a single entity, whereas in Grid and P2P systems, resources are geographically distributed across multiple administrative domains with their own management policies and goals. Another key difference between cluster and Grid/P2P systems arises from the way application scheduling is performed. The schedulers in cluster systems focus on enhancing overall system performance and utility, as they are responsible for the whole system. In contrast, schedulers in Grid/P2P systems, called resource brokers, focus on enhancing the performance of a specific application in such a way that its end-users' requirements are met. The GridSim toolkit provides facilities for the modeling and simulation of resources and network connectivity with different capabilities, configurations, and domains.


    It supports primitives for application composition, information services for resource discovery, and interfaces for assigning application tasks to resources and managing their execution. These features can be used to simulate resource brokers or Grid schedulers for evaluating the performance of scheduling algorithms or heuristics. We have used the GridSim toolkit to create a resource broker that simulates Nimrod-G for design and evaluation of deadline and budget constrained scheduling algorithms with cost and time optimizations. The rest of this paper is organized as follows. Section 2 discusses related work, with highlights on unique features that distinguish our toolkit from other packages. The GridSim architecture and internal components that make up GridSim simulations are discussed in Section 3. Section 4 discusses how to build GridSim-based scheduling simulations. Sample results of the simulation of a resource broker similar to Nimrod-G, with a deadline and budget constrained cost-optimization scheduling algorithm, are discussed in Section 5. The final section summarizes the paper along with suggestions for future work.

    GridSim: GRID MODELING AND SIMULATION TOOLKIT

    The GridSim toolkit provides a comprehensive facility for simulation of different classes of heterogeneous resources, users, applications, resource brokers, and schedulers. It can be used to simulate application schedulers for single or multiple administrative domain distributed computing systems such as clusters and Grids. Application schedulers in the Grid environment, called resource brokers, perform resource discovery, selection, and aggregation of a diverse set of distributed resources for an individual user. This means that each user has his or her own private resource broker, and hence it can be targeted to optimize for the requirements and objectives of its owner. In contrast, schedulers managing resources such as clusters in a single administrative domain have complete control over the policy used for allocation of resources. This means that all users need to submit their jobs to the central scheduler, which can be targeted to perform global optimization, such as higher system utilization and overall user satisfaction depending on resource allocation policy, or optimize for high-priority users.

    Features

    Salient features of the GridSim toolkit include the following:

    It allows modeling of heterogeneous types of resources.
    Resources can be modeled operating under space- or time-shared mode.
    Resource capability can be defined (in the form of MIPS (Million Instructions Per Second) as per the SPEC (Standard Performance Evaluation Corporation) benchmark).
    Resources can be located in any time zone.
    Weekends and holidays can be mapped depending on a resource's local time to model non-Grid (local) workload.
    Resources can be booked for advance reservation.
    Applications with different parallel application models can be simulated.
    Application tasks can be heterogeneous and they can be CPU or I/O intensive.
    There is no limit on the number of application jobs that can be submitted to a resource.
    Multiple user entities can submit tasks for execution simultaneously in the same resource, which may be time-shared or space-shared. This feature helps in building schedulers that can use different market-driven economic models for selecting services competitively.
    Network speed between resources can be specified.
    It supports simulation of both static and dynamic schedulers.
    Statistics of all or selected operations can be recorded, and they can be analyzed using GridSim statistics analysis methods.

    System architecture

We employed a layered and modular architecture for Grid simulation to leverage existing technologies and manage them as separate components. A multi-layer architecture and abstraction for the development of the GridSim platform and its applications is shown in Figure 2. The first layer is concerned with the scalable Java interface and the runtime machinery, called the JVM (Java Virtual Machine), whose implementation is available for single and multiprocessor systems including clusters. The second layer is concerned with a basic discrete-event infrastructure built using the interfaces provided by the first layer. One of the popular discrete-event infrastructure implementations available in Java is SimJava.

Recently, a distributed implementation of SimJava was also made available. The third layer is concerned with modeling and simulation of core Grid entities such as resources and information services; the application model; the uniform access interface; and the primitives for application modeling and the framework for creating higher-level entities. The GridSim toolkit focuses on this layer, which simulates system entities using the discrete-event services offered by the lower-level infrastructure. The fourth layer is concerned with the simulation of resource aggregators called Grid resource brokers or schedulers. The final layer is focused on application and resource modeling with different scenarios, using the services provided by the two lower-level layers, for evaluating scheduling and resource management policies, heuristics, and algorithms. In this section, we briefly discuss the SimJava model for discrete events (a second-layer component) and focus mainly on the GridSim (third-layer) design and implementation. Resource broker simulation and performance evaluation are highlighted in the next two sections.

SimJava [14] is a general-purpose discrete event simulation package implemented in Java. Simulations in SimJava contain a number of entities, each of which runs in parallel in its own thread. An entity's behaviour is encoded in Java using its body() method. Entities have access to a small number of simulation primitives:

sim_schedule() sends event objects to other entities via ports;
sim_hold() holds for some simulation time;
sim_wait() waits for an event object to arrive.

These features help in constructing a network of active entities that communicate by sending and receiving passive event objects efficiently. The sequential discrete event simulation algorithm in SimJava is as follows. A central object, Sim_system, maintains a timestamp-ordered queue of future events. Initially all entities are created and their body() methods are put in the run state. When an entity calls a simulation function, the Sim_system object halts that entity's thread and places an event on the future queue to signify processing the function. When all entities have halted, Sim_system pops the next event off the queue, advances the simulation time accordingly, and restarts entities as appropriate. This continues until no more events are generated. If the JVM supports native threads, then all entities starting at exactly the same simulation time may run concurrently.
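As a concrete illustration of this programming model, the following minimal sketch (not part of GridSim itself) shows two SimJava entities exchanging events through ports. The class names (Sim_entity, Sim_port, Sim_event, Sim_system) and the primitives follow the SimJava API described above, but the package name eduni.simjava and the exact method signatures should be treated as assumptions that may vary between SimJava releases.

import eduni.simjava.*;   // assumed package name for the SimJava 2 distribution

// A source entity that holds for some simulation time and then
// schedules an event on its output port.
class Source extends Sim_entity {
    private Sim_port out;
    public Source(String name) {
        super(name);
        out = new Sim_port("out");
        add_port(out);
    }
    public void body() {
        for (int i = 0; i < 3; i++) {
            sim_hold(10.0);                 // advance local simulation time
            sim_schedule(out, 0.0, i);      // send an event (tag = i) with no extra delay
        }
    }
}

// A sink entity that waits for incoming events and prints their tags.
class Sink extends Sim_entity {
    private Sim_port in;
    public Sink(String name) {
        super(name);
        in = new Sim_port("in");
        add_port(in);
    }
    public void body() {
        for (int i = 0; i < 3; i++) {
            Sim_event ev = new Sim_event();
            sim_wait(ev);                   // block until an event object arrives
            System.out.println("received event with tag " + ev.get_tag());
        }
    }
}

public class MiniSim {
    public static void main(String[] args) {
        Sim_system.initialise();
        Source src = new Source("source");
        Sink snk = new Sink("sink");
        Sim_system.link_ports("source", "out", "sink", "in");
        Sim_system.run();                   // run until no more events are generated
    }
}

With this setup the sink should receive the three events at simulation times 10, 20, and 30, which illustrates how Sim_system only advances the clock once every entity thread has halted.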

    GridSim entities

GridSim supports entities for the simulation of single-processor and multiprocessor, heterogeneous resources that can be configured as time- or space-shared systems. It allows setting of the clock to different time zones to simulate geographic distribution of resources. It supports entities that simulate networks used for communication among resources. During simulation, GridSim creates a number of multi-threaded entities, each of which runs in parallel in its own thread. An entity's behavior needs to be simulated within its body() method, as dictated by SimJava. A simulation environment needs to abstract all the entities and their time-dependent interactions in the real system. It needs to support the creation of user-defined, time-dependent response functions for the interacting entities. The response function can be a function of the past, current, or both states of entities.

GridSim-based simulations contain entities for the users, brokers, resources, information service, statistics, and network-based I/O, as shown in Figure 3. The design and implementation issues of these GridSim entities are discussed below.

User. Each instance of the User entity represents a Grid user. Each user may differ from the rest of the users with respect to the following characteristics:

types of job created, e.g. job execution time, number of parametric replications, etc.;
scheduling optimization strategy, e.g. minimization of cost, time, or both;
activity rate, e.g. how often it creates a new job;
time zone; and
absolute deadline and budget, or D- and B-factors (deadline and budget relaxation parameters, measured in the range [0, 1]) that express the deadline and budget affordability of the user relative to the application processing requirements and the available resources.

Broker. Each user is connected to an instance of the Broker entity. Every job of a user is first submitted to its broker, and the broker then schedules the parametric tasks according to the user's scheduling policy. Before scheduling the tasks, the broker dynamically gets a list of available resources from the global directory entity. Every broker tries to optimize the policy of its user and, therefore, brokers are expected to face extreme competition while gaining access to resources. The scheduling algorithms used by the brokers must be highly adaptable to the market's supply and demand situation.

    Resource:

Each instance of the Resource entity represents a Grid resource. Each resource may differ from the rest of the resources with respect to the following characteristics:

number of processors;
cost of processing;
speed of processing;
internal process scheduling policy, e.g. time-shared or space-shared;
local load factor; and
time zone.


The resource speed and the job execution time can be defined in terms of the ratings of standard benchmarks such as MIPS and SPEC. They can also be defined with respect to the standard machine. Upon obtaining the resource contact details from the Grid information service, brokers can query resources directly for their static and dynamic properties.

Grid information service. It provides resource registration services and keeps track of a list of resources available in the Grid. The brokers can query this service for resource contact, configuration, and status information.

    Input and output:

The flow of information among the GridSim entities happens via their Input and Output entities. Every networked GridSim entity has I/O channels or ports, which are used for establishing a link between the entity and its own Input and Output entities.

Note that the GridSim entity and its Input and Output entities are threaded entities, i.e. they have their own execution thread with a body() method that handles events. The architecture for the entity communication model in GridSim is illustrated in Figure 4. The use of separate entities for input and output enables a networked entity to model full-duplex and multi-user parallel communications. The support for buffered input and output channels associated with every GridSim entity provides a simple mechanism for an entity to communicate with other entities and, at the same time, enables modeling of the necessary communication delay transparently.

    Application model:

GridSim does not explicitly define any specific application model. It is up to the developers (of schedulers and resource brokers) to define them. We have experimented with a task-farming application model, and we believe that other parallel application models such as process parallelism, Directed Acyclic Graphs (DAGs), divide and conquer, etc., described in [21], can also be modeled and simulated using GridSim.

In GridSim, each independent task may require varying processing time and input file size. Such tasks can be created, and their requirements defined, through Gridlet objects. A Gridlet is a package that contains all the information related to the job and its execution management details, such as the job length expressed in MIPS, disk I/O operations, the size of input and output files, and the job originator. These basic parameters help in determining the execution time, the time required to transport input and output files between users and remote resources, and the time for returning the processed Gridlets back to the originator along with the results. The GridSim toolkit supports a wide range of Gridlet management protocols and services that allow schedulers to map a Gridlet to a resource and manage it throughout its life cycle.
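To illustrate how such task descriptions translate into code, the short sketch below builds a list of Gridlet objects. The four-argument Gridlet constructor (ID, length, input file size, output file size) and the GridletList helper follow commonly published GridSim examples, but the exact signatures are assumptions and can differ between GridSim versions.

import gridsim.Gridlet;
import gridsim.GridletList;

// Builds a small batch of Gridlets for a task-farming style experiment.
public class GridletFactory {

    public static GridletList createGridlets(int count, int userId) {
        GridletList list = new GridletList();
        for (int id = 0; id < count; id++) {
            double length = 3500.0 + id * 500.0;  // job length, interpreted against the PE ratings
            long inputSize = 300;                 // size of the input file to stage in
            long outputSize = 300;                // size of the output file to return
            Gridlet gridlet = new Gridlet(id, length, inputSize, outputSize);
            gridlet.setUserID(userId);            // the user/broker that owns this Gridlet
            list.add(gridlet);
        }
        return list;
    }
}

A broker entity would typically create such a list once and then dispatch the Gridlets to resources one by one according to its scheduling policy.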

    Interaction protocols model:

The protocols for interaction between GridSim entities are implemented using events. In GridSim, entities use events for both service request and service delivery. The events can be raised by any entity to be delivered immediately, or with a specified delay, to other entities or to itself. The events that originate from the same entity are called internal events, and those that originate from external entities are called external events. Entities can distinguish these events based on the source identification associated with them. The GridSim protocols are used for defining entity services. Depending on the service protocols, the GridSim events can be further classified into synchronous and asynchronous events. An event is called synchronous when the event source entity waits until the event destination entity performs all the actions associated with the event (i.e. the delivery of the full service). An event is called asynchronous when the event source entity raises an event and continues with other activities without waiting for its completion. When the destination entity receives such events or service requests, it responds back with results by sending one or more events, and the source entity can then take appropriate actions. It should be noted that external events can be synchronous or asynchronous, but internal events need to be raised as asynchronous events only, in order to avoid deadlocks.

A complete set of entities in a typical GridSim simulation and the use of events for simulating interaction between them are shown in Figures 5 and 6. Figure 5 emphasizes the interaction between a resource entity that simulates time-shared scheduling and other entities. Figure 6 emphasizes the interaction between a resource entity that simulates a space-shared system and other entities. In this section we briefly discuss the use of events for simulating Grid activities.

The GridSim entities (user, broker, resource, information service, statistics, shutdown, and report writer) send events to other entities to signify a request for service, to deliver results, or to raise internal actions. Note that GridSim implements core entities that simulate resource, information service, statistics, and shutdown services. These services are used to simulate a user with an application, a broker for scheduling, and an optional report writer for creating statistical reports at the end of a simulation. The event source and destination entities must agree upon the protocols for service request and delivery. The protocols for interaction between the user-defined and core entities are pre-defined.


When GridSim starts, the resource entities register themselves with the Grid Information Service (GIS) entity by sending events. This resource registration process is similar to a GRIS (Grid Resource Information Server) registering with a GIIS (Grid Index Information Server) in the Globus system. Depending on the user entity's request, the broker entity sends an event to the GIS entity to signify a query for resource discovery.

The GIS entity returns a list of registered resources and their contact details. The broker entity sends events to resources with a request for resource configuration and properties. They respond with dynamic information such as resource cost, capability, availability, load, and other configuration parameters. These events involving the GIS entity are synchronous in nature.

Depending on the resource selection and scheduling strategy, the broker entity places asynchronous events for resource entities in order to dispatch Gridlets for execution; the broker need not wait for a resource to complete the assigned work. When the Gridlet processing is finished, the resource entity updates the Gridlet status and processing time and sends it back to the broker by raising an event to signify its completion.

The GridSim resources use internal events to simulate resource behavior and resource allocation. The entity needs to be modeled in such a way that it is able to receive all events meant for it. However, it is up to the entity to decide on the associated actions. For example, in time-shared resource simulations (see Figure 5), internal events are scheduled to signify the completion time of the Gridlet that has the smallest remaining processing time requirement. Meanwhile, if an external event arrives, it changes the shared resource availability for each Gridlet, which means that the most recently scheduled event may not necessarily signify the completion of a Gridlet.

The resource entity can discard such internal events without processing them.

Resource model: simulating multitasking and multiprocessing

In the GridSim toolkit, we can create Processing Elements (PEs) with different speeds (measured in either MIPS or SPEC-like ratings). Then, one or more PEs can be put together to create a machine. Similarly, one or more machines can be put together to create a Grid resource. Thus, the resulting Grid resource can be a single processor, a shared-memory multiprocessor (SMP), or a distributed-memory cluster of computers. These Grid resources can simulate time- or space-shared scheduling depending on the allocation policy. A single-PE or SMP-type Grid resource is typically managed by a time-shared operating system that uses a round-robin scheduling policy for multitasking. Distributed-memory multiprocessing systems (such as clusters) are managed by queuing systems, called space-shared schedulers, that execute a Gridlet by running it on a dedicated PE (see Figure 12) when allocated. The space-shared systems use resource allocation policies such as first-come-first-served (FCFS), backfilling, shortest-job-first-served (SJFS), and so on. It should also be noted that resource allocation within high-end SMPs could also be performed using space-shared schedulers.
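The sketch below shows how such a resource might be assembled in GridSim code: PEs are grouped into a Machine, machines into a MachineList, and the list is wrapped in a ResourceCharacteristics object whose allocation policy selects time- or space-shared scheduling before a GridResource is created around it. The class names follow standard GridSim examples, but the constructor signatures (and the trailing load and holiday parameters of GridResource) are assumptions that vary across GridSim releases; in a complete simulation, GridSim.init() would also be called before any resources are created.

import java.util.Calendar;
import java.util.LinkedList;

import gridsim.GridResource;
import gridsim.Machine;
import gridsim.MachineList;
import gridsim.PE;
import gridsim.PEList;
import gridsim.ResourceCharacteristics;

public class ResourceFactory {

    // Creates a Grid resource with 'numPE' processing elements of the given MIPS rating,
    // scheduled either space-shared (cluster-like) or time-shared (round-robin).
    public static GridResource createResource(String name, int numPE, int mipsRating,
                                              boolean spaceShared) throws Exception {
        // One machine built out of several PEs.
        PEList peList = new PEList();
        for (int i = 0; i < numPE; i++) {
            peList.add(new PE(i, mipsRating));
        }
        MachineList machineList = new MachineList();
        machineList.add(new Machine(0, peList));

        // Architecture, OS, machines, allocation policy, time zone, cost per second.
        int policy = spaceShared ? ResourceCharacteristics.SPACE_SHARED
                                 : ResourceCharacteristics.TIME_SHARED;
        ResourceCharacteristics characteristics = new ResourceCharacteristics(
                "Sun Ultra", "Solaris", machineList, policy, 9.0, 3.0);

        // Background (non-Grid) load and the local calendar information used to model it.
        double baudRate = 100.0;                       // communication speed
        long seed = 11L * 13 * 17 * 19 * 23 + 1;       // random seed for the local load
        double peakLoad = 0.0, offPeakLoad = 0.0, holidayLoad = 0.0;
        LinkedList<Integer> weekends = new LinkedList<Integer>();
        weekends.add(Calendar.SATURDAY);
        weekends.add(Calendar.SUNDAY);
        LinkedList<Integer> holidays = new LinkedList<Integer>();  // none defined here

        return new GridResource(name, baudRate, seed, characteristics,
                                peakLoad, offPeakLoad, holidayLoad, weekends, holidays);
    }
}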


    Practical 8

Aim: To study and practice Aneka Cloud computing software.

    Theory:

    Aneka

Manjrasoft is focused on the creation of innovative software technologies for simplifying the development and deployment of applications on private or public Clouds. Its product, Aneka, plays the role of an Application Platform as a Service for Cloud Computing. Aneka supports various programming models, including Task Programming, Thread Programming and MapReduce Programming, and provides tools for the rapid creation of applications and their seamless deployment on private or public Clouds.

    Aneka technology primarily consists of two key components:

1. An SDK (Software Development Kit) containing application programming interfaces (APIs) and tools essential for the rapid development of applications. The Aneka APIs support three popular Cloud programming models: Task, Thread, and MapReduce; and

2. A Runtime Engine and Platform for managing the deployment and execution of applications on private or public Clouds.

One of the notable characteristics of the Aneka PaaS is its support for provisioning of private cloud resources, ranging from desktops and clusters to virtual data centers using VMware and Citrix XenServer, as well as public cloud resources such as Windows Azure, Amazon EC2, and GoGrid Cloud Service.

The potential of Aneka as a Platform as a Service has been successfully harnessed by its users and customers in various sectors, including engineering, life sciences, education, and business intelligence.

    Highlights of Aneka

    Technical Value

    Support of multiple programming and application environments

    Simultaneous support of multiple run-time environments

Rapid deployment tools and framework
Simplicity in developing applications on the Cloud
Dynamic scalability

Ability to harness multiple virtual and/or physical machines for accelerating application results

    Provisioning based on QoS/SLA

    Business Value

Improved reliability
Simplicity

(Figure: Aneka cloud computing schema, http://www.manjrasoft.com/images/aneka_cloud_computing_schema.png)

    Faster time to value

    Operational Agility

    Definite application performance enhancement

    Optimizing the capital expenditure and operational expenditure

    APPLICATION

    Distributed 3D Rendering

For 3D rendering, Aneka enables you to complete your jobs in a fraction of the usual time using your existing hardware infrastructure, without having to do any programming.

    http://www.manjrasoft.com/manjrasoft_distributed_rendering.html

    Build

Aneka includes a Software Development Kit (SDK) which provides a combination of APIs and tools to enable you to express your application. Aneka also allows you to build different run-time environments and build new applications.

    Accelerate

Aneka supports rapid development and deployment of applications in multiple run-time environments. Aneka uses physical machines as much as possible to achieve maximum utilization in the local environment. As demand increases, Aneka provisions VMs via private clouds (Xen or VMware) or public clouds (Amazon EC2).

Manage

Aneka Management includes a Graphical User Interface (GUI) and APIs to set up, monitor, manage and maintain remote and global Aneka compute clouds. Aneka also has an accounting mechanism and manages priorities and scalability based on SLA/QoS, which enables dynamic provisioning.

    Education and Training

Help educate a new generation of students in the latest areas of computing. Add Parallel, Distributed and Cloud Computing into your curriculum. We provide teaching tools, software and examples to get your program up and running quickly.

    Life Sciences

In the life sciences sector, Aneka can be used for drug design, medical imaging, molecular and quantum mechanics, genomic search, etc. Using Aneka, simulations take hours instead of days to complete, enabling you to improve the quality and precision of your research by carrying out multiple simulations and to decrease your time to market by running simulations in parallel.


    Practical 9

Aim: Demonstrate a Task model application on Aneka.

    Program:

MyTaskDemo.cs

using System;
using System.Threading;
using System.Collections.Generic;

using Aneka.Entity;
using Aneka.Tasks;
using Aneka.Security;
using Aneka.Data.Entity;
using Aneka.Security.Windows;

namespace Aneka.Examples.TaskDemo
{
    /// <summary>
    /// Class MyTask. Simple task function wrapping
    /// the Gaussian normal distribution. It computes
    /// the value of the distribution at a given point.
    /// </summary>
    [Serializable]
    public class MyTask : ITask
    {
        /// <summary>
        /// value where to calculate the
        /// Gaussian normal distribution.
        /// </summary>
        private double x;

        /// <summary>
        /// Gets, sets the value where to calculate
        /// the Gaussian normal distribution.
        /// </summary>
        public double X
        {
            get { return this.x; }
            set { this.x = value; }
        }

        /// <summary>
        /// computed value of the Gaussian
        /// normal distribution at x.
        /// </summary>
        private double result;

        /// <summary>
        /// Gets, sets the computed value of the
        /// Gaussian normal distribution at x.
        /// </summary>
        public double Result
        {
            get { return this.result; }
            set { this.result = value; }
        }

        /// <summary>
        /// Creates an instance of MyTask.
        /// </summary>

        public MyTask() { }

        #region ITask Members
        /// <summary>
        /// Evaluate the Gaussian normal distribution
        /// for the given value of x.
        /// </summary>
        public void Execute()
        {
            this.result = (1 / (Math.Sqrt(2 * Math.PI))) *
                          Math.Exp(-(this.x * this.x) / 2);
            Console.WriteLine("{0} : {1}", this.X, this.Result);
        }
        #endregion
    }

    /// <summary>
    /// Class MyTaskDemo. Simple driver application
    /// that shows how to create tasks and submit
    /// them to the grid, getting back the results
    /// and handling task resubmission along with the
    /// proper synchronization.
    /// </summary>
    class MyTaskDemo
    {
        /// <summary>
        /// failed task counter
        /// </summary>
        private static int failed;

        /// <summary>
        /// completed task counter
        /// </summary>
        private static int completed;

        /// <summary>
        /// total number of tasks submitted
        /// </summary>
        private static int total;

        /// <summary>
        /// Dictionary containing the sampled data
        /// </summary>
        private static Dictionary<double, double> samples;

        /// <summary>
        /// synchronization object
        /// </summary>
        private static object synchLock;

        /// <summary>
        /// semaphore used to wait for application
        /// termination
        /// </summary>
        private static AutoResetEvent semaphore;

        /// <summary>
        /// grid application instance
        /// </summary>
        private static AnekaApplication<AnekaTask, TaskManager> app;

        /// <summary>
        /// boolean flag indicating which task failure
        /// management strategy to use. If true the Log Only
        /// strategy will be applied, if false the Full Care
        /// strategy will be applied.
        /// </summary>
        private static bool bLogOnly = false;

        /// <summary>
        /// Program entry point.
        /// </summary>
        /// <param name="args">program arguments</param>
        public static void Main(string[] args)
        {
            if (args.Length < 1)
            {
                Console.WriteLine("Usage TaskDemo [master-url] [username] [password]");
                return;
            }
            Console.WriteLine("Setting Up Grid Application..");
            app = Setup(args);

            // create task instances and wrap them
            // into AnekaTask instances
            double step = 1.0;
            double min = -2.0;
            double max = 2.0;

            // initialize trace variables.
            total = (int)((max - min) / step) + 1;
            completed = 0;
            failed = 0;
            samples = new Dictionary<double, double>();

            // initialize synchronization data.
            synchLock = new object();
            semaphore = new AutoResetEvent(false);

            // attach events to the grid application
            AttachEvents(app);
            Console.WriteLine("Submitting {0} tasks...", total);

            while (min <= max)
            {
                // create a task for the current sample point
                // (the loop header and these two statements are reconstructed from
                //  context; they are lost at a page break in the source listing)
                MyTask task = new MyTask();
                task.X = min;

                samples.Add(task.X, double.NaN);

                // wrap the task instance into an AnekaTask
                AnekaTask gt = new AnekaTask(task);
                // submit the execution
                app.ExecuteWorkUnit(gt);

                min += step;
            }
            Console.WriteLine("Waiting for termination...");
            semaphore.WaitOne();
            Console.WriteLine("Application finished. Press any key to quit.");
            Console.ReadLine();
        }

        #region Helper Methods
        /// <summary>
        /// AnekaApplication Setup helper method. Creates and
        /// configures the AnekaApplication instance.
        /// </summary>
        /// <param name="args">program arguments</param>
        private static AnekaApplication<AnekaTask, TaskManager> Setup(string[] args)
        {
            Configuration conf = new Configuration(); // Configuration.GetConfiguration();

            string username = args.Length > 1 ? args[1] : null;
            string password = args.Length > 2 ? args[2] : string.Empty;

            // ensure that SingleSubmission is set to false
            // and that ResubmitMode is set to MANUAL.
            conf.SchedulerUri = new Uri(args[0]);
            conf.SingleSubmission = false;
            conf.ResubmitMode = ResubmitMode.MANUAL;
            if (username != null)
            {
                conf.UserCredential = new UserCredentials(username, password);
            }
            conf.UseFileTransfer = false;

            AnekaApplication<AnekaTask, TaskManager> app =
                new AnekaApplication<AnekaTask, TaskManager>("MyTaskDemo", conf);

            // select the failure management strategy from the command line
            if (args.Length == 1)
            {
                bLogOnly = (args[0] == "LogOnly" ? true : false);
            }
            return app;
        }

        /// <summary>
        /// Attaches the events to the given instance
        /// of the AnekaApplication class.
        /// </summary>
        /// <param name="app">grid application</param>
        private static void AttachEvents(AnekaApplication<AnekaTask, TaskManager> app)
        {
            // registering with the WorkUnitFinished event
            app.WorkUnitFinished +=
                new EventHandler<WorkUnitEventArgs<AnekaTask>>(OnWorkUnitFinished);
            // registering with the WorkUnitFailed event
            app.WorkUnitFailed +=
                new EventHandler<WorkUnitEventArgs<AnekaTask>>(OnWorkUnitFailed);
            // registering with the ApplicationFinished event
            app.ApplicationFinished +=
                new EventHandler<ApplicationEventArgs>(OnApplicationFinished);
        }

        /// <summary>
        /// Dumps the results to the console along with
        /// some information about the tasks that failed and
        /// the strategy used.
        /// </summary>
        private static void ShowResults()
        {
            // we do not need to lock the samples dictionary
            // anymore because the asynchronous events have
            // finished, so there is no risk of races.
            Console.WriteLine("Results");
            foreach (KeyValuePair<double, double> sample in samples)
            {
                Console.WriteLine("{0}\t{1}", sample.Key, sample.Value);
            }
            Console.WriteLine("Tasks Failed: " + failed);
            string strategy = bLogOnly ? "Log Only" : "Full Care";
            Console.WriteLine("Strategy Used: " + strategy);
        }
        #endregion

        #region Event Handler Methods
        /// <summary>
        /// Handles the WorkUnitFailed event.
        /// </summary>
        /// <param name="sender">event source</param>
        /// <param name="args">event arguments</param>
        public static void OnWorkUnitFailed(object sender, WorkUnitEventArgs<AnekaTask> args)
        {
            if (bLogOnly == true)
            {

                // Log Only strategy: we have to simply
                // record the failure and decrease the
                // number of total tasks by one unit.
                lock (synchLock)
                {
                    total = total - 1;
                    // was this the last task?
                    if (total == completed)
                    {
                        app.StopExecution();
                    }
                    failed = failed + 1;
                }
            }
            else
            {
                // Full Care strategy: we have to resubmit
                // the task. We can do this only if we have
                // enough information to resubmit it, otherwise
                // we switch to the Log Only strategy for this
                // task.
                AnekaTask submitted = args.WorkUnit;
                if ((submitted != null) && (submitted.UserTask != null))
                {
                    MyTask task = submitted.UserTask as MyTask;
                    AnekaTask gt = new AnekaTask(task);
                    app.ExecuteWorkUnit(gt);
                }
                else
                {
                    // oops, we have to use Log Only.
                    lock (synchLock)
                    {
                        total = total - 1;
                        // was this the last task?
                        if (total == completed)
                        {
                            app.StopExecution();
                        }
                        failed = failed + 1;
                    }
                }
            }
        }

        /// <summary>
        /// Handles the WorkUnitFinished event.
        /// </summary>
        /// <param name="sender">event source</param>
        /// <param name="args">event arguments</param>
        public static void OnWorkUnitFinished(object sender, WorkUnitEventArgs<AnekaTask> args)
        {

            // unwrap the task data
            MyTask task = args.WorkUnit.UserTask as MyTask;
            lock (synchLock)
            {
                // collect the result
                samples[task.X] = task.Result;
                // increment the counter
                completed = completed + 1;
                // was this the last?
                Console.WriteLine("Completed so far {0}, Total to complete {1}", completed, total);
                if (total == completed)
                {
                    app.StopExecution();
                }
            }
        }

        /// <summary>
        /// Handles the ApplicationFinished event.
        /// </summary>
        /// <param name="sender">event source</param>
        /// <param name="args">event arguments</param>
        public static void OnApplicationFinished(object sender, ApplicationEventArgs args)
        {
            // display results
            ShowResults();
            // release the semaphore
            // in this way the main thread can terminate
            semaphore.Set();
        }
        #endregion
    }
}


    Practical 10

    Aim: Demonstrate Thread model application on Aneka

    Program:

ThreadDemo.xml (the Visual Studio C# project file for the sample)

The listing below is a reconstruction of the project file: the element layout follows the standard Visual Studio 2005 C# project schema, while the values, GUIDs, and referenced project names are those given in the original listing.

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <ProductVersion>8.0.50727</ProductVersion>
    <SchemaVersion>2.0</SchemaVersion>
    <ProjectGuid>{753ADD9B-FFF4-4EF4-85E0-D4CC2E68EC9A}</ProjectGuid>
    <OutputType>Exe</OutputType>
    <AppDesignerFolder>Properties</AppDesignerFolder>
    <RootNamespace>Aneka.Samples.ThreadDemo</RootNamespace>
    <AssemblyName>warholizer</AssemblyName>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
    <DebugSymbols>true</DebugSymbols>
    <DebugType>full</DebugType>
    <Optimize>false</Optimize>
    <OutputPath>bin\Debug\</OutputPath>
    <DefineConstants>DEBUG;TRACE</DefineConstants>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
    <DebugType>pdbonly</DebugType>
    <Optimize>true</Optimize>
    <OutputPath>bin\Release\</OutputPath>
    <DefineConstants>TRACE</DefineConstants>
    <ErrorReport>prompt</ErrorReport>
    <WarningLevel>4</WarningLevel>
  </PropertyGroup>
  <!-- Content items marked CopyToOutputDirectory = Always appear in the original
       listing, but their file names are not recoverable from the source. -->

  <ItemGroup>
    <!-- Project references; the Include paths are not recoverable from the source.
         {487235CA-2F8A-435E-84A2-B1008894062A}  Aneka
         {D74DDE33-9C9D-4559-BF51-9050A6C8302E}  Aneka.Threading
         {240A024C-8D08-4BBA-8594-14DFC1724180}  Aneka.Util -->
  </ItemGroup>
</Project>

WarholApplication.cs

#region Namespaces
using System;
using System.Collections.Generic;  // IList class.
using System.Text;                 // StringBuilder class.
using System.IO;                   // IOException (I/O error management).
using System.Drawing;              // Image and Bitmap classes.

using Aneka.Entity;                // Aneka common APIs for all models.
using Aneka.Threading;             // Aneka Thread model.
using System.Threading;            // ThreadStart (AnekaThread initializati