
TRANSFORMATION AND CHANGE

100 MINI PAPERS

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

São Paulo

2014


PRESENTATION

TRANSFORMATION AND CHANGE: THE DECISION IS YOURS – LEAD OR GIVE UP
Rodrigo Kede de Freitas Lima, General Manager, IBM Brasil

We are in a moment of great change. If you live in Brazil and work with technology, you have more than enough reasons to feel like you are on a roller coaster. Every roller coaster provokes multiple feelings – some people are afraid, others have fun, still others get goosebumps – but one thing is certain: at the end of the ride, most will have a feeling of victory and of mission accomplished.

We may start by talking about Brazil. In 1985, after 20 years of military dictatorship, we had a civilian president again (Tancredo Neves), elected by the National Congress, who did not even take office, since he passed away before his inauguration. The new generations probably do not know the details of the "Direct Elections Now" movement, which showed the strength of a people united in the fight for their rights. Between 1985 and 1990, we went through multiple failed economic plans and a presidential election – the first in which the people went to vote and chose their president. We were still crawling in the re-establishment of the so-called democracy, something completely forgotten in almost 21 years of military dictatorship. Today, looking back, it is easier to understand the whole story, but it is not possible to relearn democracy in five years. We made many mistakes and achieved a few successes.

In 1989, we went to the ballots and elected a young president who promised to change the country and to correct the wave of corruption that raged through our beloved Brazil. Little more than two years after his election, the people once more went out to the streets to ask for the impeachment of the then president, Fernando Collor. His vice president took over and completed the mandate in 1995. Those were years of much learning for the population, for the politicians and for the system. I usually say that this was an important period in the transformation of the country into a democracy (no matter how rudimentary and problematic the period was, we were able to re-establish a democratic country).

Once again we went to the polls and elected a new president: Fernando Henrique Cardoso, the former Finance Minister of the Itamar government and one of the fathers of the Real Plan, who built his credibility with the entire country while he was a minister, then ran for office and won the elections. FHC, as he was known, was responsible for a crucial period in the development of the country. In his two terms, he stabilized the economy and changed the country's scenario: he created the fiscal responsibility law, cleaned up the financial system, making it one of the most solid in the world, and privatized many sectors, such as telecommunications and energy. Prior to the Real Plan, we lived in a world of 40% inflation per month, something unimaginable nowadays – prices in the supermarkets changed many times throughout the day (how can anyone live like that?). I consider the period of the FHC government one of operational efficiency and economic stability. Again, we are talking about almost ten years of much transformation.

In 2002, the people elected Luis Inácio Lula da Silva, or just Lula. Lula certainly surprised many people during his administration; he was less radical than some sectors expected. He honored contracts, maintained the economic philosophy of the previous government and placed people with great credibility in key positions, such as the president of the Central Bank, Henrique Meirelles (former Bank Boston global CEO). Lula focused his efforts on solving the problem of poverty in the country, his main goal. At the end of his two terms, I believe there were, just as in the previous governments, some landmarks which were fundamental for the development of the country. The first one was what I call social mobility – at some moment a democracy learns how to manage its country and stabilize its economy, and after that it is natural for the social pyramid to begin to change. We had a middle class which represented little more than 20% of the population, and today we are talking about almost 60% of the population. Brazil also benefitted from being one of the largest commodity producers in the world and significantly increased its exports to China, the second-largest economy in the world, which became our biggest trade partner, bringing a lot of wealth to the country. Naturally, after the re-establishment of democracy and the stabilization of the economy, the country grew above the average of previous decades, thanks to a new class of consumers. Our growth as a country was the result of the growth of internal consumption and of China's success.

In 2010, Dilma was elected president. Since the world economic crisis of 2008, growth has become harder. Domestic consumption alone is not enough to make the country grow at the required levels. China, even though it is still growing, grows less and buys less. So, what now? The name of the game for Brazil is efficiency and competitiveness. To achieve this, we need huge investments in infrastructure and in education, to make qualified labor available. With a 5% unemployment rate, how will we grow? We have to do more with the same amount of labor, and be more efficient and productive: ports, airports, railroads, technology, research and development in multiple areas, heavy investment in basic education. We are in the middle of this battle. We have already started this work as a country. There are many criticisms, and the people, legitimately, took to the streets to question and to ask for solutions to their problems.

I am an eternal optimist and I believe that, despite the mistakes and the pace, we are destined to grow and become a developed country at some point. We have 19% of the world's arable land and 12% of its drinkable water. How much will this be worth in 2050, when 70% of the world's population will be living in cities? We have to accelerate investment and development. That is the only way to have a developed country for our grandchildren. We live, therefore, in a country undergoing a huge transformation, and each one of us has a role in this journey.

You must be thinking: what does all this have to do with technology and IBM? In my opinion, absolutely everything. All this transformation will only happen with the intense use of technology by companies, governments and institutions. We at IBM have spent 100 years working for the progress of society; therefore, we can and will have an even more fundamental role in the transformation of Brazil.

For this reason, I would like to talk about another change that is happening in the IT market today. Clients are increasingly buying outcomes and business solutions specific to each sector, instead of infrastructure. We have to consider that commoditization will now affect not just products, but also models. The world is moving very fast towards cloud, mobile, social business and big data. Technology is leaving the back office and moving more and more to the front office; it is becoming less a cost and more a source of revenue.

"Data" is already the new natural resource, and the companies and institutions that understand this will have a head start. In IBM's specific case, we are the only company in the marketplace that has developed Cognitive Computing technology, which, in my opinion, will change the way we live and work.

We are, therefore, also living in a moment of intense transformation in technology. I am sure that in five years we will have new players and some competitors will fade away. We need, increasingly, to specialize in the new technology trends and not just in products – and this is valid for sales, for the technical team, for delivery and even for the back office.

We say that every 30 to 40 years technology undergoes a disruptive wave. That moment is now.

Brazil and technology are both at a crucial moment of change – a "special" combination. As I said, there are people who like roller coasters (like me) and others who don't.

The journey is long, but the game is won every day.

Lead or give up.


Copyright © 2014 IBM Brasil — Indústria, Máquinas e Serviços Ltda.

All other trademarks referenced herein are the properties of their respective owners.

Organization: Technology Leadership Council Brazil.

Coordinators of the book: Argemiro José de Lima and Maria Carolina Azevedo.

Graphic Design: www.arbeitcomunicacao.com.br

International Cataloging-in-Publication (CIP) Data (Câmara Brasileira do Livro, SP, Brasil)

Transformation and change [e-book]: 100 mini papers. -- São Paulo: Arbeit Factory Editora e Comunicação, 2014.

Various authors. Various translators. ISBN 978-85-99220-05-4; ISBN 978-85-99220-04-7 (original edition).

1. Computing 2. Software engineering 3. IBM - Computers 4. Leadership 5. Change 6. Information technology.

14-11614 CDD-004

Indexes for the systematic catalog:

1. Transformation and change: Leadership: Information technology 004

CONTENTS

Hybrid computers, the next frontier of computing ....................................................................................................... 10

How to read in fifty years what was written today? ...................................................................................................... 11

The Lean way of thinking ............................................................................................................................................. 12

So do you want to work with IT architecture? ............................................................................................................... 13

Quantum Computing ................................................................................................................................................... 14

The challenge of legacy systems modernization ......................................................................................................... 15

Technology for Smart Cities ......................................................................................................................................... 16

Everything as a Service ............................................................................................................................................... 17

The Fog and the Frog .................................................................................................................................................. 18

Best Practices in Requirements Elicitation .................................................................................................................. 19

The man who saw the shape of things ......................................................................................................................... 20

Software Metrics.......................................................................................................................................................... 21

Competency-based Management: It’s KSA time ........................................................................................................ 22

Daily Scrum for everyone! ........................................................................................................................................... 23

How to please the customer who contracts services? ................................................................................................ 24

Special IBM Centenary: SAGE, a cradle for innovation ............................................................................................... 25

Knowledge Integration: the consultant’s challenge .................................................................................................... 26

Special IBM Centenary: IBM RAMAC: the beginning of a new era in commercial computing ................................... 27

The Evolution of the IT Services Delivery Model .......................................................................................................... 28

Special IBM Centenary: IBM 1401, When Times Were Different... .............................................................................. 29

The Internet of Things .................................................................................................................................................. 30

Special IBM Centenary: The Space Program and Information Technology ................................................................ 31

Efficient collaboration in a smart planet ...................................................................................................................... 32

Special IBM Centenary: Seeing the world better ........................................................................................................ 33

We live in a world increasingly instrumented ............................................................................................................... 34

Special IBM Centenary: Elementary, my dear Watson! ............................................................................................... 35

Multi-core Revolution Impacts on Software Development .......................................................................................... 36

Special IBM Centenary: IBM and the Internet ............................................................................................................. 37

Governance, Risk and Conformity .............................................................................................................................. 38

Special IBM Centenary: IBM Tape: Breaking Barriers in Data Storage ....................................................................... 39

The New Millennium Bug? ........................................................................................................................................... 40

Maintenance of systems at the speed of business ..................................................................................................... 41

Scalability and Management in Cloud Computing ...................................................................................................... 42

The evolution of the Web in business management .................................................................................................... 43

Financial agility in IT .................................................................................................................................................... 44

IT Cost Management ................................................................................................................................................... 45

FCoE, integration of LAN and SAN networks .............................................................................................................. 46

Power, a lot of processing power ................................................................................................................................ 47


The Power of Social Technology .................................................................................................................................. 48

Girls and Technology .................................................................................................................................................. 49

About Prophets and Crystal Balls ................................................................................................................................ 50

Smart cities: the work moves so that life goes on ........................................................................................................ 51

Special Technology for Social Inclusion ...................................................................................................................... 52

Agile: Are you ready? .................................................................................................................................................. 53

The Theory of Multiple Intelligences and Jobs in IT ..................................................................................................... 54

Analytics at your fingertips .......................................................................................................................................... 55

The RCA process importance ..................................................................................................................................... 56

Can I see the data? ..................................................................................................................................................... 57

Learn while playing ..................................................................................................................................................... 58

Audio processing in graphics cards ............................................................................................................................ 59

Unicode ♥ דוקינו ☻ Уникод ♫ وكينوي� ......................................................................................................................... 60

The Truth is a Continuous Path .................................................................................................................................... 61

Everything (that matters) in time .................................................................................................................................. 62

Cloud computing and embedded systems ................................................................................................................. 63

Nanotechnology – How does that change our lives? .................................................................................................. 64

IT with Sustainability and Efficiency ............................................................................................................................ 65

The strategy and its operationalization ........................................................................................................................ 66

The evolution of NAS ................................................................................................................................................... 67

Go to the Cloud or not? ............................................................................................................................................... 68

Profession: Business Architect .................................................................................................................................... 69

Four Hours? ................................................................................................................................................................. 70

If you put your reputation in the window, will it be worth more than $1.00? ................................................................ 71

What is information security? ...................................................................................................................................... 72

The mathematics of chance ........................................................................................................................................ 73

The origin of the Logical Data Warehouse (LDW)........................................................................................................ 74

Storage & Fractals ....................................................................................................................................................... 75

Social Business versus Social Business Model .......................................................................................................... 76

Scientific Method and Work ........................................................................................................................................ 77

What is the size of the link? .......................................................................................................................................... 78

NoSQL Databases ...................................................................................................................................................... 79

The Challenges of the Internet of Things ..................................................................................................................... 80

Bring your mobile device ............................................................................................................................................ 81

The sky is the limit for intelligent automation ................................................................................................................ 82

Security Intelligence, a new weapon against cyber crime .......................................................................................... 83

Technology Transforming Smart Cities ........................................................................................................................ 84

Crowdsourcing: The power of the crowd ..................................................................................................................... 85

TOGAF - What is it and why? ....................................................................................................................................... 86

Reveal the client that is behind the data ...................................................................................................................... 87


Singularity: are you ready to live forever? .................................................................................................................... 88

Now I can Tweet .......................................................................................................................................................... 89

The new consumer ...................................................................................................................................................... 90

Transforming risks into business opportunities ............................................................................................................ 91

QoS in broadband access networks ........................................................................................................................... 92

Do machines feel? ....................................................................................................................................................... 93

Understanding AT and IT ............................................................................................................................................ 94

“Graphene’s Valley” and Technology Revolution .......................................................................................................... 95

The time doesn’t stop, but it can be best enjoyed… ................................................................................................... 96

Ontologies and the Semantic Web .............................................................................................................................. 97

Mass customization: obtaining a competitive advantage ........................................................................................... 98

Software Defined Network – The Future of the Networks ............................................................................................ 99

A Privileged View of the Earth ...................................................................................................................................... 100

Smile, you can be in the clouds................................................................................................................................... 101

IBM Mainframe - 50 Years of Technological Leadership and Transformation ............................................................. 102

Interoperability in the Internet of Things ...................................................................................................................... 103

Agile Project Management or PMBOK®? ..................................................................................................................... 104

Blood, Sweat and Web: how the World Wide Web was created ................................................................................. 105

Direct Memory Access: Vulnerability by design? ........................................................................................................ 106

Big Data and the Nexus of Forces ............................................................................................................................... 107

Demystifying Virtual Capacity, Part I ........................................................................................................................... 108

Demystifying Virtual Capacity, Part II .......................................................................................................................... 109

Closing Remarks and Acknowledgments ................................................................................................................... 110

HYBRID COMPUTERS, THE NEXT FRONTIER OF COMPUTING
Daniel Raisch

For over 20 years the IT industry has managed to keep Moore's Law valid, doubling the processing power of chips every 18 months, but lately it has become a great challenge to maintain such a pace, which can pose a threat to a market that keeps demanding more power.

The current chip architecture has reached its physical limit, considering the performance curve versus the heat dissipated and the energy needed for its operation. It is no longer possible to continue delivering more capacity without a change of concept and architecture. Some solutions have been tried, such as the manufacture of multicore chips, but they still could not solve this impasse. Meanwhile, the IT market continues to need more capacity to meet changing business demands through increasingly complex applications, which require ever more powerful computers.

The industry is seeking alternatives to address this issue. One approach is to increase the level of parallelism between the various processing cores on the same chip, which requires new programming concepts and the redesign of existing systems so that they can exploit this processor architecture. Another alternative is to implement a new concept of computer, based on a hybrid processor architecture.

Hybrid computers are composed of different types of processors, tightly coupled under an integrated management and control system, which enables the processing of complex and varying workloads. Intel and AMD, for example, are working on multicore chips in which the processing cores are distinct from each other, to enable performance gains without hitting the heat dissipation ceiling. However, there is still no forecast for the release of these new chips to the market.

IBM is working on a new zSeries server platform, which contains processors from its traditional families (mainframe, POWER7 and x86) arranged in a single computing platform, managed in a centralized and integrated manner. In the recent past IBM released a zSeries server integrated with Cell processors to meet a specific need of Hoplon, a Brazilian company that operates in the games market. This experience was very successful and enabled the advance towards the concept of a hybrid server. With this new platform, which is in the final stages of development, IBM intends to provide a solution for high performance and scalability, able to meet demands for solutions that require processing power with mixed characteristics, between traditional commercial applications and compute-intensive applications (High Performance Computing).

Hybrid computers are intended to overcome the limitations imposed by current architectures and also to solve the problems caused by the strong dependency between applications and the computing platform for which they were originally designed. This new type of computer functions as if there were several logical virtualized servers on a single physical server, with an integrated management layer that is able to distribute each part of an application to the processor best suited to it.

It provides the user with the facilities and benefits of an environment that is physically centralized but logically distributed, addressing the current challenges of the decentralized world related to application integration, security, monitoring, load distribution and accounting of resource use, among others.
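The management layer described above can be pictured as a dispatcher that routes each unit of work to the processor family best suited to it. The sketch below is purely illustrative: the workload classes, pool names and routing table are assumptions made for this example, not the design of any actual IBM scheduler.

```python
# Illustrative toy dispatcher for a hybrid platform. The processor pools and the
# routing table are assumptions for the sake of the example only.
from dataclasses import dataclass

@dataclass
class WorkUnit:
    name: str
    kind: str  # e.g. "transactional", "numeric" or "generic" (hypothetical classes)

# Hypothetical mapping from workload class to the best-suited processor pool.
ROUTING_TABLE = {
    "transactional": "mainframe-pool",  # classic commercial / OLTP-style work
    "numeric": "power-pool",            # compute-intensive (HPC-style) work
    "generic": "x86-pool",              # everything else
}

def dispatch(unit: WorkUnit) -> str:
    """Return the pool that should run this unit, falling back to x86."""
    return ROUTING_TABLE.get(unit.kind, "x86-pool")

for unit in (WorkUnit("invoice-batch", "transactional"),
             WorkUnit("physics-simulation", "numeric"),
             WorkUnit("web-frontend", "generic")):
    print(unit.name, "->", dispatch(unit))
```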

Simplifying IT, reducing the number of servers installed (and their requirements for space, power and cooling), greater end-to-end management capacity and, consequently, a lower total cost of ownership: these are the value propositions of hybrid architectures.

We are on the verge of a new computing platform, which could represent a paradigm shift in the IT industry and enable new business solutions, opening horizons for business and society.

For further information: http://www.redbooks.ibm.com/abstracts/redp4409.html


HOW TO READ IN FIFTY YEARS WHAT WAS WRITTEN TODAY?
Roberto F. Salomon

Only very recently have we started using files on electronic media to store documents. Besides paper, we have used many other media for our documents, such as wood, stone, clay and wax. By writing on these rigid media, our ancestors made the medium inseparable from the content itself.

With the arrival of electronic media, for the first time we separated the medium from the content. Documents have become "virtual", stored in digital files generated by some application. Thanks to digital media, a copy of a document is identical to its original. It would be the best of all worlds were it not for the problem of recovering and later reading these documents.

The analogy worked well for document-production software: a sheet of paper is displayed on the screen just as it would sit in a typewriter.

Until recently, however, there was no proper discussion about the storage format of these documents, which resulted in the compatibility issues we live with today. Tying formats to the software that created them became a barrier to the adoption of new technologies and solutions.

The lack of standardization in document storage is only the most visible part of the problem. The lack of standardization in the communication between software components has accumulated along with the large number of suppliers in the market. While the adoption of different solutions that support heterogeneous, open and published standards makes economic sense for the private sector, for the public sector the adoption of a standard is vital for the preservation of state information.

The concern with the use of open standards in official documents led the European Union to publish a definition of what an open standard is. There are several perceptions, but all agree that an open standard should:

• be maintained by a nonprofit organization, through an open decision-making process;

• be published and accessible at no cost, or at a merely nominal cost;

• ensure free access, without the payment of royalties, to any intellectual property associated with the standard.

Several standards fit this common definition, including ODF (OpenDocument Format), which defines the storage format for electronic text documents.
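Because ODF is openly documented, a document saved in it can be read back with generic tools, independent of the program that created it. The sketch below, which assumes a hypothetical file named example.odt, shows that an ODF text document is just a ZIP package whose body lives in content.xml:

```python
# Minimal sketch: reading the text of an ODF document with only the standard
# library. "example.odt" is a hypothetical file name used for illustration.
import zipfile
import xml.etree.ElementTree as ET

TEXT_NS = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"  # ODF text namespace

with zipfile.ZipFile("example.odt") as odt:
    # The mimetype entry identifies the package as an ODF text document.
    print(odt.read("mimetype").decode())  # application/vnd.oasis.opendocument.text
    content = ET.fromstring(odt.read("content.xml"))

# Print the plain text of every paragraph (<text:p>) in the document body.
for paragraph in content.iter(f"{{{TEXT_NS}}}p"):
    print("".join(paragraph.itertext()))
```

Any future software that can unpack a ZIP archive and parse XML against the published schema will still be able to recover this content.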

In Brazil, the Federal Government has already recognized the importance of adopting open standards that enable integration between its bodies and the other branches of government. The publication of e-PING (Interoperability Standards for Electronic Government) shows that the Federal Government considers it necessary to establish which standards will be used to communicate with society. This definition should be independent of economic pressure from interest groups. Initiatives such as e-PING are strategic and necessary. There is now a consensus about its importance, demonstrated by events such as the "Government Interoperability Framework Global Meeting 2010", promoted by the UNDP (United Nations Development Programme) and held in Rio in May 2010.

Policymakers need to be clear that, in an increasingly digital world, the state cannot avoid establishing the use of open standards. Failing to do so would seriously compromise collaboration between government agencies and between them and civil society, creating obstacles to the preservation of the nation's investments and memory.

For further information: http://www.odfalliance.org


THE LEAN WAY OF THINKING
Diego Augusto Rodrigues Gomes

We live in a time of constant change of thinking across several areas of knowledge. For economic reasons, many areas of a company try to decrease their expenses. In the natural environment, we have treaties between countries to reduce the gases that drive global warming. Beyond this, we are trying to optimize the use of water and electricity and to reduce pollution, and we are increasingly creating intelligent machines for domestic use. What is common to all of this? The effort to reduce the use of resources and to find a better way of using them.

Building on management principles adapted from the Toyota production system, the term "lean" was coined to describe production systems that try to deliver higher value to clients, at a much lower cost, by improving the flows in the process.

Whenever we eliminate waste in all the flows that generate value, we create processes that demand less effort, less space, less capital and less time for the creation of products and services, all with fewer defects and better quality when compared to traditional standards.

The five key points of Lean thinking, which make it indispensable, are:

1. Define what is of value to the client and satisfy him;

2. Define the value stream in a way that makes it possible to eliminate processes that do not add value to the final product (eliminate waste);

3. Ensure flow within the processes, creating a flow of continuous production that quickly meets the needs of clients (flexibility);

4. Do not push the product onto the customer, but see what really suits his needs;

5. Reach a state of excellence through perfection (quality and continuous improvement).

The improvement of processes comes not only from reduction, but also from the elimination of waste, categorized into seven types: overproduction (production beyond demand); waiting (periods of inactivity while the next step is provisioned); transportation (unnecessary movement of parts in the process); over-processing (rework); motion (people or equipment moving more than necessary to execute a procedure); inventory (stock of raw materials not required for the current need); and defects (loss of production units and of the time spent building them).

The pursuit of quality follows two strategies: train and develop the workforce, and make the processes consistent and capable of meeting the clients' needs. Motivated people who embrace the culture and philosophy of the company are the heart of this model. Everyone is responsible for improving the organization's processes and for suggesting solutions and new approaches, even when not directly responsible for them.

The flexibility of this model comes from professionals with multiple abilities. They not only know their own responsibilities and how to operate their tools, but also know how to execute the activities of other professionals, providing a better flow in the activities that make up the execution of processes.

This model of thinking has been applied with success in many domains, such as manufacturing, distribution, supply chain, product development, engineering and many others. Recently, it has also been applied to the software development process.

To summarize, to speak of Lean is to speak of coherent ways of eliminating what is unnecessary. It means breaking with the idea that "the more, the better"; it means adding more value with less work, reducing costs, optimizing production and delivery times, and improving the quality of products and services. In other words, it means eliminating everything that does not add value and is not important to the final result. Adopting the Lean philosophy as a new way of thinking and acting can be a great step towards transforming our planet into a smarter planet.

For further information: http://www.lean.org

http://www.lean.org.br

Book: O Modelo Toyota, Jeffrey K. Liker (2005)

http://agilemanifesto.org/


SO DO YOU WANT TO WORK WITH IT ARCHITECTURE?
Cíntia Barcelos

I still remember my Dad's reaction when I told him that I was taking on a new role in the company at which I had been working for 16 years. I have a PhD in Theoretical Physics, and he had a difficult time accepting that I was going to be a software analyst. When I told him that I had an excellent opportunity in the new IT architecture area, he was a little confused: "Daughter, you have not graduated in engineering, have you?" Nevertheless, he was happy for me.

Anyway, what does it mean to be an IT architect? What is this role about? An IT architect solves business problems by integrating several systems and multiple technologies, IT products and services. This professional has vast technical knowledge and experience in several disciplines, and is able to identify and evaluate the possibilities that best suit the business needs. This is why she must be a professional who knows the business industry well and connects it with the technology world.

The architect has extensive knowledge of and experience in architecture standards and methodology, system design, technical modeling and technical project management. She also has very good knowledge of the various tools available. The IT architect needs to quickly understand the environment and the standards established in the company for which the solution is to be provided.

Despite having all this knowledge and tool skills, the IT architect never creates a solution in isolation. She always works with a team of specialists who have deep knowledge of each component of the solution. This is where the IT architect requires additional skills such as leadership, communication, teamwork and business skills. It is basically this group of skills that differentiates these professionals from the others.

Another way of understanding what the IT architect does is to focus on what she does not do. She is not a "super specialist" who knows all technologies, products and services in depth; rather, she has a lot of experience and good knowledge of how groups of technologies work together.

The most important thing in her activity is to know the role of each technology component and its inputs and outputs, rather than how the component works internally or its underlying technology. She is not a project manager, but she needs to understand the basic concepts of that discipline, and she is generally best equipped to assist the project manager and help her understand and orient the implementation of the project and the solution. She is also not a consultant, but needs to know consulting methodologies and techniques. The IT architect is neither a super developer nor a senior IT specialist.

IT architects are in high demand in the job market, and the demand continues to increase each year. There are already certifications for this job role offered by the Open Group, IASA, Zachman and others.

As an IT architect, I have found my vocation: the job and career I had always been looking for. In this role I perform many technology leadership functions and have the opportunity to understand business and industry issues in depth. Just as I have not fully understood the articles my father has published, I am sure my father has not yet fully understood my work or why I find it so exciting.

I think I will hand him this article.

For further information: http://www.iasahome.org/web/home/certification


QUANTUM COMPUTING
Conrado Brocco Tramontini

Quantum Computing (QC) consists of processing data represented by subatomic particles and their states. But before discussing QC, we need to take a look at some principles of quantum mechanics, the basis for various branches of physics and chemistry. The study of quantum mechanics began in the early twentieth century with the work of the German Max Planck and the Dane Niels Bohr, Nobel laureates in Physics in 1918 and 1922, respectively.

The concepts of quantum mechanics are so unusual that Einstein himself did not accept the theory as complete. Niels Bohr had already warned, in 1927, that "anyone not shocked by quantum theory has not understood it". According to quantum mechanics, the state of a physical system is the sum of all the information that can be extracted from the system when performing any measurement, including the sum of these states. In other words, the state of a physical system is the sum of all its possible states. This phenomenon, called superposition, is one of the base principles of QC.

A thought experiment known as "Schrödinger's cat" demonstrates the strange nature of quantum superposition. Suppose a cat is shut in a box with a bottle of poison which is released if a reaction occurs in a quantum particle. The cat has a 50% chance of staying alive or dying. According to quantum mechanics, due to the superposition of the states of the particle, the cat is alive and dead at the same time, waiting only for the influence of an observer to set its state.

Here enters another important principle, the Heisenberg Uncertainty Principle, which states that we cannot determine simultaneously and accurately the position and the momentum of a particle. To relieve the cat from this situation and find out what happened, you must open the box and look. Once the measurement of the state of the system is made, it collapses into a single state, alive or dead. Until this occurs, the states are superposed.

If you are a little shocked by what you are reading here, it means we are on the right track…

While a classical computer uses electrical pulses to represent the state of bits with values 0 or 1, QC uses particles and superposed quantum properties, such as atoms that are excited or not, photons that can be in two places simultaneously, electrons and positrons or protons and neutrons with superposed states.

A single transistor molecule may contain several thousand protons and neutrons that can serve as qubits. Superposition makes it possible to represent much more data, increasing the capacity of communication channels and allowing QC to process exponentially faster than traditional computing. Instead of processing one unit of data at a time, QC will "think" in blocks, processing several data units at once as if they were only one.
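To make the contrast with classical bits concrete, the short sketch below simulates the state vector of a register of qubits on a classical computer. It is only an illustration of the mathematics described above (amplitudes, superposition and measurement collapse), not real quantum hardware, and the function names are ours.

```python
# Classical simulation of an n-qubit register, illustrating why superposition is
# expensive to imitate: the simulator must track 2**n complex amplitudes.
import numpy as np

def equal_superposition(n_qubits: int) -> np.ndarray:
    """State vector in which all 2**n basis states carry the same amplitude."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

def measure(state: np.ndarray) -> int:
    """Measurement collapses the superposition: one basis state is observed,
    with probability equal to the squared magnitude of its amplitude."""
    probabilities = np.abs(state) ** 2
    return int(np.random.default_rng().choice(len(state), p=probabilities))

state = equal_superposition(10)    # 10 qubits -> 1024 complex amplitudes at once
print(len(state), measure(state))  # e.g. "1024 417": each observation yields one outcome
```

Each added qubit doubles the number of amplitudes a classical simulator must store, which is the exponential gap the text refers to.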

In December 2009, Google demonstrated, on a controversial quantum chip developed by D-Wave, an image search engine which, by using superposition, operated faster than current search engines. It is as if you could search for your socks in all drawers at once.

Another important application is quantum encryption, in which a server entangles qubit A with qubit B and sends them, respectively, to machines A and B. What the server writes in its qubit is replicated to the qubit of machine B without the risk of being intercepted, since there is no physical contact; it uses another phenomenon called, not coincidentally, teleportation.

Quantum systems are still difficult to control, because they are sensitive to even minimal interference and because the window of time in which the particles can be controlled is still very small. However, despite these challenges, there is a consensus that this technology has developed faster than initially imagined.

With quantum computing, can we say that classical computation is alive and dead at the same time?

For further information: http://www.fisica.net/computacaoquantica/

http://qubit.lncc.br/index.html


THE CHALLENGE OF LEGACY SYSTEMS MODERNIZATION
Victor Amano Izawa

Most companies need to modernize their systems to meet their business needs. These updates are complex, as they often involve major changes to software that supports critical business processes.

Modernization may be required for a variety of reasons, including: 1. compliance with regulatory laws; 2. cost reduction; 3. optimization of business processes. All of these are necessary for an enterprise to stay ahead in a highly competitive market.

When it comes to the modernization of legacy systems, cost is the main factor that prevents companies from updating them. Even when these expenditures are considered a critical investment for the business, there is another obstacle that discourages many ideas and proposals for modernization: a modernization effort can be long drawn out, and the process may impact the business processes it supports.

Does this mean that companies should sacrifice their business and remain less competitive? How can they mitigate this risk?

One solution adopted by many companies is to modernize the infrastructure of their systems using distributed architectures (high-performance clusters). Thus, companies can keep their legacy systems running with high performance and capacity, using computers with high processing power, rapid-response hard drives for large data volumes and optical fiber networks with high data transfer capacity.

When developing a modernization strategy for their systems, companies should consider the adoption of a software development process framework, scope management and a risk management approach.

Initially, a company must assess which of the available software development process frameworks, such as the Open Unified Process (OpenUP) or the Rational Unified Process (RUP), is best suited to its requirements. A process framework enables an organized and optimized modernization.

During modernization, it is possible that many improvements will be proposed as system requirements. It is important that each one is analyzed and understood so that the defined scope is not altered, because the inclusion of a simple enhancement can increase the complexity of the modernization and, consequently, impact other areas of the system. This could create new risks to stability and increase the cost of development.

Therefore, managing the risk of each modification is very important to avoid future complications.

The challenge of modernization can be met as long as the risks, the costs and the process as a whole are managed properly. In the current market, a company must demonstrate the competence to keep innovating, stay ahead of the competition and wisely manage new challenges.

For further information: Legacy Systems: Transformation Strategies (2002) – William M. Ulrich; Prentice Hall PTR

Modernizing legacy systems: Software technologies, engineering processes, and business practices (2003) – Robert Seacord, Daniel Plakosh, Grace Lewis; Addison-Wesley


TECHNOLOGY FOR SMART CITIES
José Carlos Duarte Gonçalves

For quite a while we have been saying that globalization is making the world increasingly flat, with fewer geographical barriers. But we are beginning to notice an even greater phenomenon: the planet is becoming smarter.

When I started my career in IT, 33 years ago, the memory of an IBM S/370 computer could store up to 64 kilobytes of information. Any mobile phone today has thousands of times that amount of memory.

The reach of technology has also taken an enormous leap over these years. Today there are more than four billion cell phone users in the world, which represents nearly 70% of the world's population. By the end of 2010, it is estimated that there will be more than a billion transistors for each human being, each costing one tenth of a millionth of a cent, more than 30 billion RFID (radio frequency identification) tags in circulation and two billion people connected to the Internet.

What does it all mean? It means that, for the first time in history, the digital and physical infrastructures of the world are converging. Virtually anything can become digitally connected at a low cost. The world is moving towards a trillion connected things – the "Internet of Things" made up of cars, refrigerators, buildings, highways and so on.

But to build a truly smarter world we increasingly need to worry about the environment, the sustainability of the planet and the depletion of its natural resources.

Today we have the opportunity to use technology to solve or minimize major problems of society, such as traffic jams, drinking water conservation, the distribution of food and energy, and health services, among others.

One of the most critical issues is transportation, with chaotic traffic jams in all major cities. In the city of São Paulo alone, the cost of traffic jams, taking into consideration the idle time of commuters at peak transit times, has reached more than R$ 27 billion per year. If we also consider the cost of fuel and the impact of pollutants on the health of the population, we end up with an additional annual cost of R$ 7 billion.

How to address this challenge? Cities such as Stockholm, Singapore, London and Brisbane are already seeking smart solutions to better manage traffic and reduce pollution. The initiatives range from traffic forecasting to intelligent and dynamic toll systems. In Stockholm, with the implementation of the urban toll, traffic jams have decreased by 25%, pollution levels by 40%, and the use of public transport has increased by 40 thousand people per day.

Government leaders and institutions need to identify the right opportunities and obtain the necessary investment through incentives and support programs. Becoming smarter applies not only to large corporations but also to small and medium-sized businesses, the engines of our economic growth.

We will be increasingly evaluated by the way we apply our knowledge and by our capacity to solve big problems. We must embrace the challenge in order to solve those problems and make our cities smarter.

For further information: http://www.ibm.com/innovation/us/thesmartercity

http://cities.media.mit.edu/

http://www.smartcities.info/


EVERYTHING AS A SERVICE
Sergio Varga

The evolution and robustness of virtualization technologies, the advances in the performance and capacity of servers and network components, and the rise of multi-tenant applications have allowed companies to provide a variety of solutions using the "as a Service" (aaS) model. Applications that until recently were not expected to follow this model now do so. For example, in late 2009 IBM released TivoliLive, a monitoring environment that uses the "Monitoring as a Service" model. Other examples include Box.net and Salesforce.com, which integrate document storage and customer relationship management, offering new combined services based on the "Software as a Service" (SaaS) model.
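Multi-tenancy, mentioned above, is what lets one running application serve many client companies at once. The sketch below is only a conceptual illustration of that idea (the class and the data layout are assumptions, not the architecture of TivoliLive, Box.net or Salesforce.com): every request is scoped by a tenant identifier, so tenants share the code and the infrastructure but never each other's data.

```python
# Conceptual sketch of multi-tenant data access; names and storage are
# illustrative assumptions, not any vendor's actual design.
from collections import defaultdict
from typing import Optional

class MultiTenantStore:
    def __init__(self) -> None:
        # tenant_id -> that tenant's private records
        self._data = defaultdict(dict)

    def put(self, tenant_id: str, key: str, value: str) -> None:
        self._data[tenant_id][key] = value

    def get(self, tenant_id: str, key: str) -> Optional[str]:
        # A tenant can only ever read from its own partition.
        return self._data[tenant_id].get(key)

store = MultiTenantStore()
store.put("acme", "plan", "premium")
store.put("globex", "plan", "basic")
print(store.get("acme", "plan"), store.get("globex", "plan"))  # premium basic
```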

Communication as a Service (CaaS), Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Service Management as a Service (SMaaS) are other examples of this service model, which has gained wide adoption in the last few years. According to an IDC forecast, this market will grow from US$ 17.4 billion in 2009 to more than US$ 44 billion in 2013. Research from Saugatuck Technologies indicates that by the end of 2012, 70% of small and medium-sized companies and 60% of large companies will have at least one SaaS application. This shows that the service model will not be tied to a particular company size.

The first large class of applications to leverage this service model was Customer Relationship Management (CRM), mainly targeting end users. After CRM, other applications began to be ported to this model, and today the long list includes custom applications developed in-house. Other relevant use cases of aaS solutions are pilot projects and the evaluation of applications before they are implemented within companies.

An important reason for the proliferation of aaS applications is that cloud computing has become a reality. Several companies are making cloud-based infrastructure available: Amazon released Elastic Compute Cloud in 2006 and IBM released CloudBurst in 2009.

However, a 2008 IDC study identified four major challenges to making the "as a Service" model more pervasive: security, performance, availability and integration. Enhancing the security of the deployed solutions and guaranteeing data privacy are key priorities for companies that offer applications using the "as a Service" model. Another priority is making applications available at an acceptable performance level. In addition to deploying servers with high processing power, numerous network points of presence around the globe are necessary to minimize network latency. High availability in these environments requires continuity planning and uninterrupted monitoring. A further challenge is enabling solutions that are easy to integrate with other client systems, possibly hosted on different cloud platforms in the future.

Despite these challenges, the easy implementation, the low cost and the lack of need to invest in hardware and software are the greatest benefits for clients adopting applications offered under this service model.

What might we witness in the near future? IT companies will compete in a market where consumers will not invest heavily in IT assets but will increasingly consume business solutions as services.

For further information: http://blogs.idc.com/ie/?p=543

www.ibm.com/services/us/gts/flash/tivoli_live.swf

http://www.saugatech.com/


THE FOG AND THE FROG
Wilson E. Cruz

One of the most disturbing facts of our time is the excess of stimuli

that today goes through our eyes and ears and with any luck,

invades our brains. Every time someone comes along saying:

“it’s a lot of information! I can’t manage it!”

The phenomenon, pretty new, growing dizzyingly, and already

at the threshold of sanity, has disturbed at both personal and

professional levels the majority of the “connected” people.

To help diagnose the situation, and to open the door to some themes of reflection, I turn here to Dee Hock, creator of the concept that defines the VISA organization, and his fantastic book “Birth of the Chaordic Age”: “Over time, the data turns

into information, information turns

into knowledge, knowledge turns

into understanding and, after a

long time (...) understanding can

transform into wisdom. (...). Native

societies (...) had time to develop

the understanding and wisdom”.

Note that the word “time” appears

three times.

Leveraging the fifth anniversary

of the Mini Paper Series, and

its tradition as an instrument of

dissemination, I venture some

issues and ideas that might bring some light to those who seek

direction in the middle of the mist. Let’s start with the questions:

• How many Mini Papers have you read? More importantly, for how many of them have you looked up the information in the “To find out more” section?
• Why do the results of your search on those famous sites come out in that particular order, even though all of the first hundred answers match your search terms 100%?

• Finally, what does a frog do when it is in the middle of a fog?

If your answers did not bring you the feeling that you are just

scratching the surface of the most important issues of your life,

don’t waste your time with the rest of this article. Go to the next

subject, and then to the next. If, on the other hand, the answers

left you a bit uncomfortable or wary, it is worthwhile to reflect on

some points (reflect, not necessarily agree).

• Get out of the trap that “the most accessed is the best”. On any popular website, the top of the recommendation list shows the most downloaded item, the most widely read news and the most watched video. Who guarantees that quantity (especially quantity generated by others) means quality for you?

• Create, grow and retain your sources list, based on your

system of values and preferences. You pay your bills, so

you are not a slave to the “universal encyclopedia” of others.

• Pay attention and preferably formalize your rules and merit

criteria. What is good for you? What matters for you?

• Set aside time to discuss. It has been said here before, but it is worth repeating: at the end of the frantic sequence that runs from noise to wisdom, discussion is the final filter.

• Finally, slow down. Pre-med-i-tat-ed-ly. Cal-cu-lat-ed-ly. Note that right near the ground there is less fog, and take small leaps, shorter and more accurate, spending more time on the ground to look around and evaluate the world.

In the middle of all this, what about the birthday of the TLCBR (six years!) and the Mini Paper Series (five years)? They can be disseminators of information and useful knowledge, which is already a lot in this dense, low fog. However, I hope for more. I hope to see them as Dee Hock’s “native society”, seeking thought and reflection and, with them, understanding and wisdom.

For further information: http://www.onevoeiroeosapo.blog.br

HOCK, Dee - “Birth of the Chaordic Age” – Berrett-Koehler Publishers; 1st Edition/ 1st Printing edition (January 1, 2000)


BEST PRACTICES IN REQUIREMENTS ELICITATION
Cássio Campos Silva

The activity of requirements elicitation is one of the most important

software engineering practices. Through this activity, the aim is

the understanding of user needs and business requirements,

in order to address them later through a technological solution.

In specialized literature, some works adopt the term elicitation,

instead of gathering, because this practice is not only the

gathering of requirements, but also the identification of facts

that compose them and the problems to be solved. Because it is an interpersonal activity, this practice is very dependent on the analyst’s ability to understand and on the users’ ability to express their needs.

In a survey conducted by the Standish Group, five critical factors

for the success of a project were mapped: user engagement,

executive management support, clear descriptions of the

requirements, proper planning, and realistic expectations. Note that several of these factors are directly related to requirements.

Considering the complexity of requirements elicitation activities and their dependence on the relationship between the involved parties, analysts should adopt a few good practices in order to facilitate

this process:

Preparation: Prepare in advance and in a proper manner for the planned activities, which are generally conducted through interviews, questionnaires, brainstorming sessions and workshops.

Stakeholders: Map (in advance) who will be the participants

of the process, what are their roles in the project and in the

organization and what are their levels of knowledge and

influence. It is imperative that the right people are involved

as soon as possible.

Posture: Always look for effectiveness in communications, and

try to demonstrate prudence during conflict situations.

Understanding: Try to focus on understanding the problem and avoid hasty conclusions. At this early stage, the most important thing is to know how to listen.

Past experiences: Use previous experiences positively to better understand the problem, but avoid assuming that the current problem is the same as one already solved for a past client or project.

Documentation: Describe the problem in a clear and objective

manner. In case of doubt, consult the client and avoid inferences.

Try to use examples cited by stakeholders. The adoption of diagrams and figures always helps in the documentation and understanding of the requirements. The creation of prototypes also contributes

to the common understanding of the proposed solution.

Validation: Ensure that stakeholders validate the documentation, verifying the understanding of the problem and the desired improvements, and, if necessary, request changes.

At the end of the process it should be possible to demonstrate, in documented form, the understanding of the problem, the customer’s needs and the opportunities for improvement. This will delimit the

scope of the project and should guide the design of the solution,

as well as the project planning.

The measurement of the size, complexity and risks of a project

will depend on the quality and coherence of the requirements. It is crucial that this activity is performed in a rigorous and detailed manner, because any failure at this stage can lead to unsuccessful projects, financial losses and unsatisfied customers.

For further information: http://en.wikipedia.org/wiki/Requirements_elicitation

http://www.volere.co.uk

Book: Requirements Engineering 2nd Edition-Ken Jackson


THE MAN WHO SAW THE SHAPE OF THINGS
Fábio Gandour and Kiran Mantripragada

Benoît Mandelbrot died on October 14, 2010. He could have

been just another exotic name in science, but he was much more than that. Mandelbrot was born in Warsaw in 1924, to a Polish Jewish family with a strong academic tradition.

He first studied in France and then in the United States. In 1958,

he began working as a scientist at the IBM T.J. Watson Research

Lab, where he advanced to IBM Fellow and Scientist Emeritus.

Benoît Mandelbrot was the mathematician who best understood

and published a new formulation for representing natural phenomena.

His understanding has led to the

creation of the word “fractal”, inspired

by the Latin word fractus meaning

broken, or shattered. He affirmed

that nature is governed by Fractal

geometry, because Euclidean

geometry couldn’t describe more

complex natural forms such as

clouds, trees, the path of rivers and

mountain ranges.

The classical Euclidean Geometry is

built from 3 elements: point, line and

plane. The point has no dimension,

i.e., it is a zero-dimensional element.

The line has a single dimension, the

length, and therefore, can provide a measurable quantity. Finally,

the plane presents two dimensions, length and width. With these

3 elements, Euclid of Alexandria, who lived between 360 and

295 B.C., built the Euclidean geometry.

Some mathematicians, such as Bernhard Riemann, observed

that the concepts described by Euclid can be extrapolated to

objects of “n” dimensions, such as hyperspheres, hyperplanes, n-dimensional simplices and other “figures”.

Mandelbrot, with a brilliant observation, noted that there are “broken” dimensions, meaning that there are “n-dimensional” objects where “n” is a non-integer real number. Thus, if a line has a single dimension and the plane has two dimensions, what would a “1.5-dimensional” object be? In fact, Mandelbrot showed that such objects exist and can be described by the theory which he called fractal geometry.

Fractal geometry studies objects with interesting properties, such as the Sierpinski Carpet, which results from successively dividing the original square into nine equal smaller squares and removing the central one, forming an object whose area tends to zero and whose perimeter tends to infinity. The image shown below is an extrapolation from the “Sierpinski Carpet” to the “Sierpinski Cube”. Observe that fracturing a shape into a smaller one of the same form, contained within the first, creates an endlessly repeating structure.
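As a minimal illustration of this construction (not part of the original article; written in Python), the sketch below computes the carpet’s area and perimeter after a chosen number of removal steps, together with its fractal dimension log 8 / log 3 ≈ 1.89:

import math

def sierpinski_carpet(steps, side=1.0):
    """Area and perimeter of the Sierpinski carpet after `steps` removals.

    At each step every remaining square is split into 9 equal squares and
    the central one is removed: 8 squares survive, each 1/3 of the side.
    """
    area = side * side          # area of the starting square
    perimeter = 4 * side        # perimeter of the starting square
    squares = 1                 # number of remaining (filled) squares
    hole_side = side / 3.0      # side of the holes removed at this step
    for _ in range(steps):
        # removing the central square of every filled square subtracts area...
        area -= squares * hole_side ** 2
        # ...and adds the boundary of every new hole to the perimeter
        perimeter += squares * 4 * hole_side
        squares *= 8
        hole_side /= 3.0
    return area, perimeter

for n in (1, 2, 5, 10):
    a, p = sierpinski_carpet(n)
    print(f"step {n:2d}: area = {a:.4f}, perimeter = {p:.1f}")

# Each step yields 8 copies scaled by 1/3, hence the "broken" dimension
print("fractal dimension:", math.log(8) / math.log(3))  # ~1.8928

Running it shows the area shrinking toward zero while the perimeter grows without bound, exactly the behavior described above.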

Benoît Mandelbrot may have been

a victim of his own creation because

the images constructed from the

Fractal geometry had a strong

appeal to the world of the arts.

This appeal caused fractal geometry to be seen and used more as an illustration tool than as a mathematical model for the representation of nature. For example, a search for the word “fractal” on Google Images returns more than a million results, all of them of great visual appeal.

Being a mathematician, Mandelbrot was never considered a candidate for the Nobel Prize, because there is no such category in the awards.

practical use of Fractal geometry can, in the future, recognize

his contribution to other areas, such as Physics or Economics.

If anyone shows, for example, that the evolution of financial crises also has a fractal behavior, justice will have been done. Along another line, Stephen Wolfram and the theory of cellular automata, explained in his book “A New Kind of Science”, may be the beginning of the correction of this historical misconception.

For further information: http://tinyurl.com/34f59ty

http://www.math.yale.edu/mandelbrot/

http://www.wolframscience.com/


SOFTWARE METRICS
Daniela Marques

The fact that quality is an important item for any product or service

is not disputed. Software that is used to support the various

business lines in companies must also demonstrate higher quality

levels with each new version. It is also a fact that new versions

are required to meet new demands, as well as offering new

features to customers. This brings up the question of how to

increase productivity in software development while maintaining

or increasing quality standards.

Software metrics are among the tools employed by Software

Engineering. These metrics can be considered a set of attributes of the software development cycle that are known and documented in advance.

Despite the existence of

IEEE 1061-1998, a lack of

consensus on the use of these

metrics still persists, though

few doubts remain that they

are essential to the software

development process. After

all, with metrics it is possible to analyze the information collected in order to track software development, make plans to keep the project on schedule and achieve the desired level of quality.

Regarding quality, it is important to stress that everyone involved in

the process of developing software must participate in determining

the software quality levels, as well as in the resolution of any non-compliance with the originally specified requirements.

Software metrics can be divided into direct measures (quantitative)

and indirect measures (qualitative). Direct measures are those

that represent an observed quantity, such as cost, effort, number

of lines of code, execution time and number of defects. Indirect

measures are those that require analysis and are related to the

functionality, quality, complexity and maintainability.

Software metrics directly assist in project planning. For example,

the metric “LOC (Lines of Code)” is used to estimate time and

cost by counting lines of code.

The productivity during each test (derived from the execution time)

and the number of defects found provide the information needed

to estimate project completion and the effort required for each

testing phase. The number of defects found also provides data

for determining the quality of the software (an indirect measure)

and root cause analysis of defects helps to formalize a plan for

improvements in future versions (see example in chart).
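As a hedged illustration (not from the original text; the figures and helper names are hypothetical), direct measures such as lines of code, defect counts and execution time can be combined into simple indicators like defect density per thousand lines of code and test productivity:

def defect_density(defects_found, lines_of_code):
    """Defects per KLOC: a direct count turned into a comparable indicator."""
    return defects_found / (lines_of_code / 1000.0)

def test_productivity(test_cases_executed, execution_hours):
    """Test cases executed per hour, derived from execution time."""
    return test_cases_executed / execution_hours

# Hypothetical data for two versions of the same product
versions = {
    "v1.0": {"loc": 120_000, "defects": 540, "tests": 800, "hours": 200},
    "v1.1": {"loc": 150_000, "defects": 450, "tests": 900, "hours": 180},
}

for name, m in versions.items():
    print(name,
          f"density={defect_density(m['defects'], m['loc']):.2f} defects/KLOC",
          f"productivity={test_productivity(m['tests'], m['hours']):.1f} tests/hour")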

There are several existing metrics with many applications

in the software life cycle. It is the responsibility of the project manager to coordinate actions to determine the quality standards required and define which elements should be measured and monitored during the cycle. Collecting this information allows not only a better monitoring of the software development process, but also the qualitative analysis of the software as a product. Historical metrics

allow change requests or new feature proposals to be more

accurately estimated, since similar projects tend to go through

the same problems and solutions.

To maintain or raise the software quality level it is essential to

measure and monitor throughout the development cycle. Metrics

provide not only a vision of the real situation but also allow you to

plan and take action in the search for continuous improvement.

For further information: http://www.kaner.com/pdfs/metrics2004.pdf

http://standards.ieee.org/findstds/standard/1061-1998.html

[Chart: Qualitative analysis of defects found, broken down by category: Data, Code, Environment, Requirements]


COMPETENCY-BASED MANAGEMENT: IT’S KSA TIME
Pablo Gonzalez

It can be said that managing people is a constantly evolving

science filled with challenges. In this context, a management

model that is becoming increasingly popular in organizations

is the so-called competency-based management model, whose main goal is to nurture and better prepare employees for higher productivity and suitability to the business, thus enhancing the

intellectual capital of the organization.

Based on this, managing competencies means to coordinate

and encourage employees to reduce their gaps (points for

improvement), know what they are capable of executing (current

competencies) and understand what the company expects of

them (competencies required).

The term “competency” can be represented

by three correlated properties summarized

by the acronym KSA — Knowledge, Skill and

Attitude. Knowledge refers to the assimilation

of information one has acquired throughout life,

and that impacts their judgment or behavior

— the experience. Skill refers to the productive

application of knowledge — the know-how.

Finally, Attitude refers to one’s conduct in

different situations and in society — the action.

To illustrate the application of this concept in

an organization let us imagine that, on a scale of zero to ten, your

skill in “negotiation” is six. Assuming the minimum level required

by the company to be ten, we can say that you have a gap of

value four in this competency. Based on such result, together

with results of other techniques for performance analysis such

as 360-degree feedback, a plan is created to reduce the gaps

and through which the company will suggest how and when

these gaps will be addressed. The goal is to enhance existing

competencies aligned to the strategic objectives of the organization

through an individual professional development plan.
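A minimal sketch of the gap calculation described above (illustrative only; the competencies, levels and variable names are hypothetical):

# Hypothetical current vs. required competency levels on a 0-10 scale
required = {"negotiation": 10, "communication": 8, "cloud architecture": 7}
current = {"negotiation": 6, "communication": 8, "cloud architecture": 4}

# Gap = required level minus current level (never negative)
gaps = {skill: max(required[skill] - current.get(skill, 0), 0)
        for skill in required}

# Largest gaps first: candidates for the individual development plan
for skill, gap in sorted(gaps.items(), key=lambda item: item[1], reverse=True):
    if gap > 0:
        print(f"{skill}: gap of {gap}")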

The implementation of competency-based management is not

complex but requires a few specific methods and instruments.

Having a well-defined mission, vision, values, strategic goals and processes is one of the key steps for its adoption.

HR is responsible for setting the array of required competencies

in collaboration with managers of each area. Another essential

factor is to maintain active communication throughout the

project, in order to clarify objectives and keep the evaluated employees aware of the outcomes. It is also noteworthy

that the lack of preparation for evaluators to provide feedback as

well as resistance from employees might hinder model adoption;

this difficulty, however, can be mitigated through prior training

and awareness.

The use of technology may be an accelerator since it assists in

the identification and storage of competencies over time, as well

as allowing for the generation of charts and reports for analysis.

Following this model, the company can better structure the

professional roles and competencies that

are essential for their business, increase

task efficiency, identify talent, and ensure

professionals have the necessary competitive

edge to succeed.

Thus, competency management is flexible

enough to be adopted by companies of all

sizes, from small to multinational organizations,

proving to be feasible and effective in multiple

scenarios.

Companies such as Coca-Cola, IBM, Embraer,

Petrobras and Shell, among many others, have already adopted

measures aimed at competency-based management and report

significant improvements in terms of task effectiveness, employee

recognition and motivation, among other benefits.

In short, it is up to the company to use this model in a cycle

of continuous improvement in which, at every new project or

evaluation cycle, new indicators should be created, and old ones

re-evaluated in order to measure results and plan the next steps.

It is within this context that competency-based management leads

to corporate excellence and satisfaction of those who represent

the greatest asset of a company: its people.

For further information: http://slidesha.re/19HNtL

http://bit.ly/fMylgE

http://www.gestaoporcompetencias.com.br


DAILY SCRUM FOR EVERYONE!
Renato Barbieri

It’s lunchtime at the Morumbi

Shopping mall in Sao Paulo.

I arrived early since – as

regulars know – it is the

only way to secure spots

at the larger tables in the mall’s restaurants without a

reservation. At the chosen

restaurant, waiters and

maitres are all standing up,

gathered in a circle. The

maitres lead a quick meeting

with general guidelines and

a few specifics. New waiters are presented to the team and

receive a warm welcome. Some waiters share anecdotes, ask

quick questions and after ten to fifteen minutes the meeting is

closed. This episode occurs daily in all restaurants of this chain,

according to one of the maitres. Scene cut.

The Agile Movement was born as an initiative of software developers

with the goal of finding alternatives to traditional development

methods so as to turn this activity into something lighter and nimbler;

this undertaking resulted in the publication of the Agile Manifesto in February 2001. Among the new methodologies that emerged

from this movement, eXtreme Programming (XP) preaches, as

one of its basic principles, to hold daily meetings taking no longer

than fifteen minutes, in which all participants remain standing and

share experiences and issues at hand. Another agile methodology,

Scrum, also encourages quick daily meetings known as Daily

Scrum Meetings (or simply Daily Scrum), with the same purpose:

share experiences and issues in a fast, agile and frequent way.

In a Daily Scrum, each participant must answer three basic

questions:

• What has been done since the last meeting?

• What do I intend to do before the next meeting?

• What prevents me from proceeding?

The idea is not to turn those moments into mere status report

meetings, but to share what each member has done and will

do to achieve the group’s collective goal. Issues and inquiries

are only briefly mentioned, as their details and solutions should

be tackled externally with the appropriate people.

The Scrum methodology includes a facilitator in the team, the

Scrum Master, who has a fundamental role in Daily Scrum.

He acts as a moderator and the guardian of the methodology,

not allowing discussions to extend beyond the given time and

scope. He keeps the focus on essentials and points out any excesses and distractions.

The practice of Daily Scrum can be adopted in many situations

beyond software development. There are practical usage examples in support teams and restaurants (as shown at the beginning of this article), adapted to their needs but keeping the primary objective: collaboration in teamwork.

And why not adapt a good idea? It is common to think of

methodologies as “straitjackets” that, instead of supporting

and helping professionals, restrict actions and inhibit creativity.

This is an outdated concept unfitting of the Agile Movement.

Best practices are flexible by nature and allow their own concepts and implementations to be reviewed. The Daily Scrum is no

exception and doesn’t even need to be daily (as the original

name suggests) but should be frequent. And most important

of all: these meetings should foster the union of their participants and ensure that each one of them collaborates to achieve a common goal.

For further information: http://www.agilemanifesto.org

http://www.scrumalliance.org

http://www.extremeprogramming.org


HOW TO PLEASE THE CUSTOMER WHO CONTRACTS SERVICES?
Rosemeire Araujo Oikawa

Imagine the following real-life situations:

• Running out of towels in a hotel room after returning from a

whole day at the beach;

• Receiving your car from the valet with scratches after a

perfect dinner in a restaurant;

• Waiting ten minutes for your call to be answered by a Call

Center and then not getting your problem solved.

The list of adverse situations that may happen when contracting

services is huge. As customers have become more demanding

and aware of their rights, the tendency is that this list continues

to grow so service provider companies must be prepared to

deal with it.

Nowadays, the services market

represents 68.5% of the world’s

GDP. Companies have learned

to outsource what is not their

business’ focus, to sell products

as services, to provide specialized

services, and many are learning

how to work in a process-driven

fashion. With all that being said,

it seems some forget the most

important thing: to meet a customer’s expectations.

Establishing an SLA (Service Level Agreement) is the key to starting a successful relationship with the client. This document is the means by which the service provider translates the customer’s expectations into goals to be delivered, penalties that may be applied and duties that must be discharged. The challenge here is to have well-defined SLAs, because faults occur precisely when the client’s expectations are not correctly translated into this agreement.

In order to have well-defined SLAs, the following aspects should

be taken into consideration:

• To understand the needs of the service’s users (‘user’ is

the person who uses the service, and ‘customer’ is the one

who pays for it);

• To understand how the service will support a customer’s

business and the impacts it can have on them;

• To establish achievable and truly measurable levels;

• To structure the agreement with a service provider mindset,

and not with one of a product seller;

• To create a cost model that supports service levels offered

to the client;

• To specify service levels for

all main service components,

including outsourced parts;

• To define agreements with the internal and external teams responsible for service execution.

The effectiveness in defining and

managing SLAs is the basis for

the delivery of quality services.

The formalization of a client’s expectations and the clear understanding between the parties about what was contracted and what will be delivered shape the perception of a service, making it measurable and precise.
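As a rough illustration (not part of the original article; the target, period and downtime figures are hypothetical), a measurable service level such as monthly availability can be checked against the agreed target as follows:

def availability(total_minutes, downtime_minutes):
    """Measured availability for the period, as a percentage."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

# Hypothetical SLA: 99.5% availability over a 30-day month
sla_target = 99.5
minutes_in_month = 30 * 24 * 60
downtime = 180  # minutes of unplanned outage recorded in the month

measured = availability(minutes_in_month, downtime)
print(f"measured availability: {measured:.3f}% (target {sla_target}%)")
print("SLA met" if measured >= sla_target else "SLA missed - penalties may apply")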

To achieve an SLA is to deliver what is expected, while exceeding it may drive up the cost and even pass unnoticed by the client. On the other hand, failing SLAs may compromise the relationship with the client or the perception of the overall service quality. SLAs should be more than just measurements; they should be an instrument supporting the continuous improvement of services and of companies’ business processes.

For further information: http://www.gartner.com/DisplayDocument?id=314581

Source of data: World Bank (http://data.worldbank.org)
[Chart: sector shares of world GDP (Agriculture, Manufacturing, Services), 1800–2000]


SPECIAL IBM CENTENARY: SAGE, A CRADLE FOR INNOVATION
Marcelo Sávio

The United States Air Force, driven by the impact of the explosion of Soviet experimental atomic bombs in the early 1950s, initiated an ambitious project called SAGE (Semi-Automatic Ground

Environment) for the creation and implementation of a defense

system against bombers.

This system was deployed between 1957 and 1961 and operated

in a distributed fashion over twenty-three data processing centers

installed in huge bunkers in North America, each containing two

large computers called AN/FSQ-7 (Army-Navy Fixed Special

eQuipment). This machine, specially designed by IBM, was

labeled an “electronic brain” by the press headlines of the time,

and to this date it is considered the largest computer that has

ever existed: it weighed over 250 tons and contained over 50

thousand electronic valves, consuming 3 megawatts of electricity.

This system processed data from hundreds of radar stations,

calculated air routes and compared these against stored data

to enable quick and reliable decision-making to defend against

enemy bombers, potentially loaded with highly destructive

nuclear weapons.

To make such complexity work, a number of innovations were

introduced in the project, such as the use of modems for digital

communication through ordinary telephone lines, interactive video

monitors, computer graphics, magnetic-core memories, software

engineering methods (the system had more than 500 thousand lines

of code written by hundreds of programmers), error-detection and

system maintenance techniques, real-time distributed processing,

and high availability operations (each bunker always had one of its two computers running in stand-by mode).

The experience acquired by the participating organizations (Bell, Burroughs, IBM, MIT, SDC and Western Electric) and individuals was subsequently carried over to other military and civilian systems

projects. For instance, some worked on the design of ARPANET,

the computer network that resulted in the Internet that we all use.

Others worked in the system of civil air traffic control for the FAA

(Federal Aviation Administration) in the United States. SAGE

also served as a model for the SABRE system (Semi-Automated Business Research Environment), created by IBM in 1964 to track

American Airlines flight reservations in real time – a system still

running to this date.

SAGE was operational until the end of 1983; nonetheless, when it was completed in early 1962, the main airborne threats were no longer bombers but fast intercontinental ballistic missiles, against which the system was rendered useless. Despite its premature obsolescence, SAGE marks an important milestone in the history of science and technology: as the first real-time, geographically distributed online system in the world, it explored uncharted territory with the help of innovative ideas and technologies that contributed remarkably to the rise of the then-newborn computer industry.

For further information: http://www.ibm.com/ibm100/us/en/icons/sage/

http://www.youtube.com/watch?v=iCCL4INQcFo


KNOWLEDGE INTEGRATION: THE CONSULTANT’S CHALLENGE
Márcia Vieira

Current society, which is being called “Hypermodern”, promotes a culture marked by excessive consumption of information,

disposable things and temporary relationships. The pace of

change and the lack of time lead to an accelerated way of life,

and a state of constant attention and search for information about

multiple subjects. This new scenario generates job opportunities

for consulting on various organizational disciplines, such as

Corporate Governance, Information Technology, Marketing and

Sales, amongst others.

According to the Brazilian Institute of Organization Consultants,

consultancy work can be defined as “the

interactive process between a change agent

(external and/or internal) and his/her client,

where the agent takes the responsibility to

help client’s executives and employees in

the decision making, though not having

direct control of the situation that should

be changed by him/her”.

As a change agent, the consultant must be

skilled in identifying and solving problems,

and demonstrate a passion for disseminating

knowledge. When this does not occur, there is a risk of being

discarded by the logic of Hypermodernity. It basically means that

in order to be a good consultant in any organizational discipline,

one must seek useful, practical and applicable knowledge, with

a result-driven focus. Keeping one’s skills current and extending one’s knowledge is the greatest challenge and, at the same time, one of the biggest motivators in a consulting career.

Good memories in my career as consultant remind me of

distinguished professionals who had the ability to provide

creative solutions and to achieve great results from a wide range

of information and acquired knowledge.

As knowledge is the consultant’s essential raw material, one can

state that the knowledge generation process is the starting point,

where consultants must always seek a cause and effect, and

manage customer expectations regarding problem resolutions.

Knowledge generation establishes a continuous cycle and a

synergistic relationship between explicit and tacit knowledge.

Explicit knowledge, in general, is easier to get, through corporate

knowledge bases, courses, training, or available media. Yet, the

tacit knowledge results from a professional’s work experience. In

a globalized world, it becomes more difficult to integrate these

knowledge types. For this reason, it is vital

that the consultant maintains an extensive

relationship network and develops new ways of acting together with individuals and groups (teamwork), in order to integrate the parties and their views of the problem, as well as to deepen all of its aspects. The knowledge

integration competency and the ability to get

an overview of the whole are fundamental

to the consultant.

In addition, understanding how the concepts are built and articulated, rather than just accepting the parties’ points of view, helps to identify problems, suggest changes and bring in the perspective of other cultures. The consultant is someone who, in addition to having the know-how, must learn how to think and, therefore, must have a high level of education and an attitude of lifelong learning, in which learning how to learn and teamwork skills are guiding principles.

For further information: http://www.ibco.org.br/

Books: Apprentices and Masters: The New Culture of Learning, Juan Ignácio Pozo (2002), and Introduction to Complex Thought, Edgar Morin (2003)


SPECIAL IBM CENTENARY: IBM RAMAC: THE BEGINNING OF A NEW ERA IN COMMERCIAL COMPUTING
José Alcino Brás

During the 1950s computers were no longer restricted to military

applications, and started to be required for the automation of

enterprise business processes. In order to meet this market

demand, in 1956, IBM released the IBM 305 RAMAC (Random

Access Method of Accounting and Control), its first mass-

produced computer, designed to run accounting and commercial

transactions control applications, such as order processing,

inventory control and payroll.

The big news with the 305 RAMAC wasn’t its processing power

but the use of a new peripheral device for data I/O called “IBM

350 disk storage unit”, which allowed very fast data writing and

reading compared to other storage media used until then. Having

the size of two refrigerators, the IBM 350 consisted of 50 disks of 60 cm in diameter, centrally mounted on a single spindle driven by a motor, adding up to 5 megabytes of capacity accessed at a rate of 10 kilobytes per second.

The RAMAC disk drive represented a true milestone in

technology evolution, in which several technical barriers were

overcome, such as finding the suitable material for making the

disk and the magnetic surface, and creating a mechanism

for reading and writing with a fast and accurate movement,

by positioning it in the physical location of the data which

spun at the speed of 1,200 rotations per minute. Besides

that, it had to guarantee not to physically touch the magnetic

disk surface, by injecting compressed air between the disk’s

surface and the read and write head.

By allowing the information to be written, read and changed in a

few seconds, and, more important, to be accessed in a random

fashion, it eliminated the need for sorting before data processing,

which until then was a requirement imposed by the technology of magnetic tape and punched card equipment, the most widely used data storage methods at the time.

RAMAC’s success led to more than 1,000 units being sold and installed around the world, including Brazil, where

it arrived in 1961. This machine ended the era of punch cards

and introduced a new era, where corporations began to use

computers to conduct and streamline their businesses, making

use of online transaction processing and storing large volumes

of data on magnetic disks.

The technology introduced by RAMAC was the seed that originated

the magnetic disks produced up to the present day — formerly

called “winchesters”, then “hard drives” and just “HDs”, today —

that nowadays are available on the market with a storage capacity

greater than 2 terabytes, spinning at 15 thousand rotations per

minute and reaching data transfer rates that exceed 200 megabytes

per second (more than 20 thousand times higher than IBM 350).

Perhaps that group of engineers in IBM’s lab did not imagine that RAMAC would mark the beginning of an era for one of the most important technologies in the computer industry: one that would completely change the storage and processing of information, an intangible good of great value to many segments of society, which in turn keep demanding and generating even more information, estimated to have grown last year by more than 1 zettabyte (1 billion terabytes). Bring on the disks

to store it all!

For further information: http://www.ibm.com/ibm100/us/en/icons/ramac/

http://www.youtube.com/watch?v=CVIKk7mBELI

http://www.youtube.com/watch?v=zOD1umMX2s8


THE EVOLUTION OF THE IT SERVICES DELIVERY MODEL
Eduardo Sofiati

The IT services market has evolved significantly in recent years.

Providers and customers aim to broaden the scope of service contracts, in order to obtain greater benefits – not just cost reduction

– through better alignment between technology solutions and

business requirements.

In the traditional model, specialized providers deliver repetitive services based on efficiency and scalability gains, which provide competitiveness. Since the IT services market has

many competitors, each provider aims to propose differentials

in order to attract and keep customers and thereby increase their

participation in this market.

Some providers are focused on models

that add more value to the offered services,

meeting the business requirements of their

customers. The provider, in this case, is

perceived by the client as a strategic partner

rather than a supplier and tends not to offer

commodities, but rather solutions.

As an example, we can mention the evolution in the service offerings recently launched by the infrastructure and telecommunications outsourcing segment, which are aligned with the latest technology trends, such as Cloud

Unified Communications and Network Security. This evolution

is transforming the traditional outsourcing model into a utility-

based model, which changes the concept of IT asset ownership.

According to Gartner, by 2012, 20% of enterprises will no longer own IT assets, which creates opportunities for providers to build complete offerings capable of delivering services with more agility and quality through the adoption of leading-edge technologies.

Regarding the performance of the service providers, there has

been a lot of evolution in recent years.

Through the use of Key Performance Indicators (KPIs) they can

measure the effectiveness of the processes and technology

solutions that have been applied to contracts. SLAs (Service Level Agreements), which have been driving outsourcing contracts for quite some time, have also evolved towards the definition of indicators more aligned with the availability of the services and systems that impact clients’ business.

In order to survive and grow in such a turbulent market, while still maintaining healthy results, service companies must follow strategies that are already being adopted, mainly by global companies:

Standardization: maximize the use of common models across the greater part of the services portfolio, in order to enable repeatable delivery, resulting in economies of scale and simpler delivery structures;

Integration: execute delivery models as efficiently as possible, using the provider’s full capacity, in order to obtain the lowest possible labor cost by taking advantage of the skills available in each region;

Automation: reduce manual tasks as much as possible in order to lower costs and raise the quality of service deliverables.

It is possible to reflect on the remarkable developments in IT

service delivery over the years. In the old format, providers

created a new approach for each project, offering customized models for each customer, an inefficient method that wastes time and money. Currently they are looking for ways to simplify the design of projects, particularly their foundations, through

standardized and simplified models, based on best practices

and industry knowledge. With that, more time is used to solve

business problems specific to each customer, turning IT into an

arm that stimulates growth and generates savings, making the

company prepared to meet new challenges.

For further information: http://www.ibm.com/services


SPECIAL IBM CENTENARY: IBM 1401, WHEN TIMES WERE DIFFERENT...
José Carlos Milano

Keeping the proportions in mind, it could be said that the IBM 1401 computer was, in the sixties, as important for the spread of enterprise computing among small and medium-sized companies as the PC is for consumers today. To give an idea, over ten thousand units of this model were sold, at a time when many readers of this article had not even been born. Those were different times for sure...

The 1401 was the first fully transistorized computer manufactured by IBM (transistors replacing the vacuum tubes). It was smaller and

more durable than its predecessors. It was released in 1959 and

sold until 1971 (and many continued to work into the 1980s). Its

success was so great and so much code had been developed

for it, that IBM was forced to create an emulator in microcode to

run the 1401 programs in the mainframe models that followed,

starting with the System/360, released in 1964. Surprisingly, many

of these emulators continued to be used in other mainframe

models until 2000, when finally the remaining programs for the

1401 had to be rewritten because of Y2K (millennium “bug”).

The ease of programming, first through SPS (Symbolic Programming System) and then with Autocoder, was largely responsible for the success of the 1401. At the time, most computing environments (called Data Processing Centers) consisted of the “mainframe”, the 1401 itself, and the “frames” of the card punch and reader unit (1402) and the printer (1403). Tape drives and magnetic disks were not yet part of these environments.

Since operating systems did not exist either, the creation of executable code from the symbolic programs written by the user was very peculiar. The SPS program preceded the program written

by the user. All of the programming was done with punched

cards. By pressing the “load” button in the 1402 card reader, the

SPS program was loaded into the memory of the 1401 that would

then read and translate the user-written program to executable

code. Actually, the translation of the user program took place in two stages. In the first, a deck of cards with the partial translation was generated. Those cards were then fed back into the

card reader of the 1402. Finally, the cards with the final program

were generated and were ready for execution.

The smallest addressable memory unit in the 1401 was the

“character”, comprised of eight bits (physically a ferrite core

for each bit). This “character” would be the equivalent of what we now call a “byte”, a term that only came into use in the era of the System/360. Out of these eight bits, six were used to represent the character, the seventh was the parity bit and the eighth represented a “word mark”. A “word” in the 1401 was a variable-length sequence of consecutive characters, the last of which carried the “word mark”. That’s why the 1401 was known as a machine that processed words of varying sizes. Each instruction in its machine language could be 1, 4, 7 or 8 characters long.
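A minimal Python sketch of the word-mark idea (an illustration of the concept as described above, not of the real 1401 instruction set; the memory contents are invented):

# Each memory position is modeled as (character, word_mark_bit),
# echoing the 1401 idea of one flag bit per stored character.
memory = [
    ("P", 0), ("A", 0), ("Y", 1),            # word "PAY" (mark on its last character)
    ("1", 0), ("2", 0), ("3", 0), ("4", 1),  # word "1234"
    ("O", 0), ("K", 1),                      # word "OK"
]

def read_words(cells):
    """Group characters into variable-length words delimited by word marks."""
    word, words = [], []
    for char, mark in cells:
        word.append(char)
        if mark:                 # the flagged character closes the current word
            words.append("".join(word))
            word = []
    return words

print(read_words(memory))   # ['PAY', '1234', 'OK']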

Despite all the beauty of technology, it was not trivial to program

these fantastic machines, especially if compared with current

systems development environments. In the past 50 years, the

facilities and programming techniques today allow an enormous

productivity in code creation. Does anyone dare to estimate

how many lines of program code must exist in the world today?

For further information: http://www.ibm.com/ibm100/us/en/icons/mainframe/

http://ibm-1401.info/index.html


THE INTERNET OF THINGS
José Carlos Duarte Gonçalves

The Internet was created by the Americans in 1969 as a network

aiming to share expensive and scarce computer resources between

universities funded by ARPA (Advanced Research Projects Agency),

an agency for the promotion of research from the United States

Department of Defense. The ARPANET (so named in the beginning)

was designed to support heterogeneous computing environments

and offer the maximum possible resilience, even in the event of

a failure or unavailability of some network node. This became

possible through the use of packet routing systems distributed

among several computers interconnected with each other, allowing

for the continuity of communications and operations. To always be

available and allow the connection

of heterogeneous systems, two

features were needed: simplicity

and standardization. Simplicity

is the key to facilitating the

connection of anything, and

adherence to standards is

necessary to allow interoperability,

in addition to communication and

information sharing.

In the nineties, with the creation

of friendlier ways of interaction,

such as the World Wide Web

(WWW) and also with the advent of

software browsers, everyone, and not only academic researchers, came to have access to the facilities provided by the Internet.

The first big news of that moment was the creation of websites,

such as those provided by companies, banks and newspapers.

Users, who hired Internet service providers (ISPs), began to access

information from around the world, enter virtual museums, read

news in real time from anywhere and also use other applications

such as chat and email.

In 1997, IBM created a strategy for using the Internet as a business

platform (e-business), which helped to consolidate the Internet’s great turn toward the business world, when companies began to exploit the Internet to do business and increase profits. Currently the Internet is being used intensely for collaboration through social networks, blogs, chats, Twitter, etc. Petabytes of data are

generated every day by numerous applications, causing an

explosion of information.

New uses for the Internet continue to emerge, going beyond the connection between people and/or computers. There are already nearly a trillion things connected to the network, which enables applications and uses in our lives that were once unimaginable, based on event-monitoring data received directly from sensors

in real time. Making use of economically viable and compatible

technologies, these sensors installed in equipment, packaging,

buildings, products, stock, pacemakers, watches and others

use microchips that can capture information from multiple subsystems. This information is sent to central systems to support decision-making and, when needed, action based on events from the monitored objects.

The Internet of Things is creating a network of identifiable objects that can interoperate with each other (what has been called Machine to Machine, or M2M) and with data centers and their computational clouds. By merging the digital and physical worlds, it is allowing objects to share information about the environment in which they find themselves and to react autonomously to events, influencing or modifying the very processes in which they are embedded, without the need for human intervention.
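A minimal sketch of this event-driven pattern (purely illustrative; the sensor name, temperature limit and actuator message are hypothetical): a reading captured by a sensor becomes a structured message, and a simple rule reacts to it without human intervention.

import json
from datetime import datetime, timezone

def sensor_reading(sensor_id, temperature_c):
    """Package a raw measurement as a message a central system could receive."""
    return json.dumps({
        "sensor": sensor_id,
        "temperature_c": temperature_c,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def handle_event(message, limit_c=75.0):
    """Simple M2M rule: if the reading exceeds the limit, trigger an actuator."""
    event = json.loads(message)
    if event["temperature_c"] > limit_c:
        # In a real deployment this would command an actuator or alert a system.
        return f"ACTUATE: cool down equipment monitored by {event['sensor']}"
    return "OK: no action required"

msg = sensor_reading("turbine-042", 81.5)
print(msg)
print(handle_event(msg))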

Such solutions have applicability in various sectors of society and enable the emergence of innovative business models, based on a new world: instrumented, interconnected and intelligent. This

is the Internet of stuff, the Internet of Things.

For further information: http://www.youtube.com/watch?v=sfEbMV295Kk

http://www.ibm.com/smarterplanet/

http://en.wikipedia.org/wiki/Internet_of_Things


SPECIAL IBM CENTENARY: THE SPACE PROGRAM AND INFORMATION TECHNOLOGY
Agostinho Villela

When we contemplate the Moon in the sky, it is hard to imagine

how man has managed to get to this satellite. The Moon is over

380 thousand kilometers away from Earth, which means more

than 10 times the distance of the farthest artificial satellites and

about 400 times further than the maximum range of the space

shuttles. It is even harder to imagine when we consider that such

a feat is now more than 40 years old, the NASA Apollo XI mission,

at a time when the most powerful computers had less processing

power and memory than the most basic cell phone has today.

The Apollo XI mission was part of the American space program,

which was started in 1958 as a reaction to the launch of the

Sputnik I and II satellites by the then rival Soviet Union, starting

the space race during the Cold War. Over time it was subdivided into several programs, with the Mercury, Gemini and Apollo projects covering the first manned flights.

The Mercury project was started in 1959 and lasted until 1963.

The primary goal was to put a man into orbit around the Earth

and it consisted of 26 missions.

Between 1965 and 1966 project Gemini was executed. The focus,

in this case, was to develop necessary techniques for complex

journeys into space. It consisted of 10 missions and included events such as “spacewalks” and “rendezvous” between spacecraft.

The Apollo program, which had the goal of putting a man on the Moon by the end of the 1960s, began in 1961 and received a big boost from the famous speech of then President John Kennedy to the American Congress, delivered days after, and in response to, the success of the first manned flight into space, by the Soviet cosmonaut Yuri Gagarin. At its peak, the Apollo program came to employ 400 thousand people across 20 thousand organizations, including government, companies, universities and research centers, and spent an estimated US$ 24 billion at the time (something like US$ 150 billion today).

The space program required the state of art in computer science

at that time, pushing the limits of technology and contributing

significantly to its progress. Advances in microelectronics

and the hardware and software architecture of the systems

developed to design and control the spacecrafts and their

crew were substantial.

IBM’s participation in this context was always very intense,

being also considered an integral part of the American space

program. From the outset, IBM provided computers (IBM 70x family) to track satellites: the Soviet Sputniks, the American Explorer-1 (the first artificial satellite of the United States) and Echo-1 (the first communications satellite in the world). In the mid-sixties,

the IBM 7090 computer family helped NASA to handle the first

manned missions. And, from 1964, in addition to providing S/360 computers to design, track and control spacecraft, IBM went on to supply embedded computers for navigation and monitoring, such as the Instrument Units (IUs) of the Saturn rockets,

contributing decisively to the success of the first manned flight

that landed on the Moon in July 1969.

Few events contributed so intensely to innovation and the

advancement of information technology as the Space Program,

which still continues, in the form of space shuttles, probes,

space telescopes, as well as the International Space Station and,

who knows, a manned mission to Mars. Technologies such as

integrated circuits, solar panels and fuel cells would not exist or would have taken longer to advance had there not been the challenge of the conquest of space. And no other information

technology company has been as much of a protagonist in

this process as IBM.

For further information: www.ibm.com/ibm100/us/en/icons/apollo/

www.ibm.com/ibm/ideasfromibm/us/apollo/20090720


EFFICIENT COLLABORATION IN A SMART PLANET
Lenilson Vilas Boas

In our everyday professional work, we access emails and websites,

we use blogs, Twitter, social networks, instant messaging,

smartphones, video conferencing, online document editing

and sharing as well as several other collaboration tools. These

technologies allow us to perform more and more activities,

regardless of our location and they influence our behavior.

On the other hand, organizations and society also require more agility, whether at work or in our personal lives. But how can you increase productivity without optimizing or cutting back some activities?

It is in this context that collaborative tools

can be great allies, reducing the number of applications that we have to manage and

making our interaction with the equipment

more responsive and intuitive. Productivity

becomes directly proportional to the ease

of use of these tools, causing a change that

directly influences our interaction with the

devices and applications, through which

we continuously receive and send information.

Intelligent collaboration does not depend only on technology but

also on a cultural change, where we move from an individualistic

stance to another, more collaborative stance. This new approach

integrates people and society through equipment and systems,

which are considered real “companions”, indispensable for

communication and exchange of information in our daily life.

We can further improve collaboration and increase productivity

with the use of contextual data. This data considers where the

user is located, with whom they are interacting, or whether they are in a special situation, e.g., a situation of danger. This capability appears in context-aware computing, with applications that make decisions based on the environment (context) in which they are operating at any given time, considering the location and the situation around the user.

In other words, context-aware computing considers implicit inputs that describe the situation and the characteristics of the surrounding environment. Contextual data comes from location (GPS) indicators, temperature and light sensors, date and time, computer network monitors, service status and other sources. The integration of notebooks, mobile phones, sensors and various other devices with the physical environment enables intelligent collaboration, allowing applications to adapt to the conditions and limitations of their users.

An example of this is a context-aware mobile phone, able to switch automatically to “vibrate” mode instead of “ring”, depending on the time or location.
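A minimal sketch of such a context-aware decision (the rules, locations and times are hypothetical, for illustration only): the ringer profile is chosen from implicit inputs such as time and location.

from datetime import time

def ringer_profile(context):
    """Pick a phone profile from implicit context inputs (location, time)."""
    now = context["time"]
    # A meeting room or cinema implies silence is expected
    if context["location"] in ("meeting_room", "cinema"):
        return "vibrate"
    # Late-night and early-morning hours also imply silence
    if now >= time(22, 0) or now <= time(7, 0):
        return "vibrate"
    return "ring"

# Hypothetical situations the device might detect from its sensors
print(ringer_profile({"location": "meeting_room", "time": time(14, 30)}))  # vibrate
print(ringer_profile({"location": "street", "time": time(23, 15)}))        # vibrate
print(ringer_profile({"location": "home", "time": time(10, 0)}))           # ring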

With the future implementation of IPv6, the new version of the Internet communication protocol, and its vast addressability of

network equipment, collaboration and

integration will be even greater, because

there will be many more devices connected and able to interact

and exchange information on a global scale. It will be possible,

for example, to automatically identify and combat emergencies such as fires, explosions and toxic leaks, to monitor traffic, and even to inform your doctor about test results or an accident you may have been in.

Virtually any object, with which human beings may or may not

interact, will be able to exchange information with other equipment

and people, which will possibly increase our efficiency in the

execution of day-to-day tasks. All this without anyone having to

press any buttons.

For further information: http://www.ubiq.com/hypertext/weiser/SciAmDraft3.html

http://www.hardware.com.br/artigos/computacao-ubiqua/


SEEING THE WORLD BETTER
Carlos Eduardo Abramo Pinto

What do a laser developed for manufacturing of electronic chips,

a turkey dinner for Thanksgiving and three scientists have in

common? If you, like me and more than 20 million people in the

world, have had eye correction surgery in the past 20 years, then

this unlikely combination of technology, a family reunion and the

quest for innovation is part of your life.

Created in the 1970s by Russian scientists and developed in

subsequent years by various groups, including the US naval

research laboratory, the excimer laser was designed for the manufacture of electronic devices, a purpose for which it is still used today.

The term excimer is derived from the expression “excited dimer”, which reflects how the laser works: through the electronic stimulation of reactive gases such as chlorine and fluorine, mixed with inert gases such as argon, krypton or xenon, ultraviolet light is produced. This light can make changes, as needed, in various materials at the microscopic level. The excimer laser is known as a cold laser, as it does not produce heat or damage in the region next to the point of application.

In the early eighties, three scientists at IBM’s Thomas J. Watson

research laboratory in the United States – James Wynne, Samuel

Blum and Rangaswamy Srinivasan – were researching new

uses of the excimer laser, which had recently been acquired by the laboratory. Based on the characteristics of the excimer laser described above, the scientists wondered what the outcome of its application on human or animal tissue would be. The first tests were performed on leftover turkey from a Thanksgiving dinner held by one of the scientists, with highly promising results: extremely accurate cuts were made into the meat, bones and cartilage without damage to the area around the laser application.

As a way to demonstrate the result, the team produced an enlarged

image of a strand of human hair with the word IBM etched into

it by the excimer laser.

This image was published around the world and started a number

of discussions about the use of this discovery in different areas

of medicine, such as brain surgery, orthodontics, orthopedics

and dermatology. At the same time, ophthalmic surgeons were

looking for alternatives to the existing techniques of eye surgery.

The existing procedure, using a scalpel, was not precise enough,

causing permanent damage to the cornea and requiring a long

recovery time for patients.

Through collaborative research between IBM and ophthalmologists

from the Columbia Presbyterian Medical Center, a 1983 study

examined the use of the excimer laser for human corneal reshaping.

This study initiated a global research program, culminating in 1995

with the US authorities approving the first commercial system of

laser-based refractive surgeries.

Today the two main types of corneal surgery by excimer laser

surgeries are photo therapeutic, known as PTK, used to remove

corneal tissue to correct eye diseases, such as corneal ulcers,

and photo refractive surgeries, used to remove corneal tissue

to correct refractive problems, such as myopia, hyperopia, and

astigmatism. The main techniques of photo refractive surgeries

are PRK and LASIK. The PRK technique requires long recovery

times, estimated at 4 to 8 weeks, where it is necessary to use

contact lenses to protect the cornea in the early days of recovery.

The LASIK technique (laser-assisted in-situ keratomileusis) is

the most popular eye surgery in the world, as it enables rapid

recovery for patients, estimated at one to two days, does not

require the use of contact lenses for the recovery process, and

has the highest percentage of success, eliminating the need for

glasses and contact lenses in more than 90% of cases.

Who knew that a discovery like this started from the curiosity

of three scientists and a simple turkey? It is innovation at the

service of society.

For further information: http://www.ibm.com/ibm100/us/en/icons/excimer/


WE LIVE IN A WORLD INCREASINGLY INSTRUMENTED - Antônio Gaspar

The world is becoming increasingly instrumented,

interconnected and intelligent. In this context,

the new generations of instrumentation tech-

nologies will become the foundations of a

Smarter Planet. I say “new” but this concept

began with the industrial revolution.

Until the 1980s, the word instrumentation referred to disparate

concepts that included something related to music, surgery

(surgical instrumentation) and, less well known, something called

industrial instrumentation. It was this concept that evolved to

become a constant and omnipresent part of our lives.

In industrial plants, instrumentation is related to automation and control

systems, which consist of three basic components: sensors,

controllers and actuators.

Sensors are responsible for capturing the so-called "action variables" (temperature, level, pressure, etc.). Acting as transducers, they convert the physical dynamics of these variables into telemetry signals and transmit them using standardized communication protocols, in either analog format (which is still widely used) or digital format.

Controllers are the receivers of the telemetry signals from the

sensors and are responsible for applying correction algorithms

(the basic principle of “measurement, comparison, computing

and correction”), using pre-set benchmarks for decision-making.

Actuators are devices that receive commands from the controllers.

Their role is to act on a “manipulated variable” (e.g. water flow), in

order to achieve results on a “measured variable” to be controlled

(e.g. water level in a boiler).
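As a minimal sketch of that measure-compare-compute-correct cycle, the Python fragment below simulates a boiler whose water level is held near a setpoint by a proportional controller driving a valve; the tank model, gain and setpoint are illustrative assumptions, not a real control system.

# Illustrative sensor -> controller -> actuator loop (proportional control).
SETPOINT = 1.50     # desired water level in metres (the measured variable)
GAIN = 0.8          # proportional gain used by the controller
OUTFLOW = 0.05      # water consumed from the boiler on every cycle (metres)

level = 1.00        # initial water level reported by the sensor

for cycle in range(10):
    error = SETPOINT - level          # compare the measurement with the setpoint
    valve = max(0.0, GAIN * error)    # controller computes the correction
    level += valve - OUTFLOW          # actuator (valve) acts on the manipulated variable
    print(f"cycle {cycle}: level={level:.2f} m, valve opening={valve:.2f}")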

These systems have now surpassed the borders of industrial

installations. The engineering of materials and miniaturization

of electronic circuits – today the planet has more than 1 billion

transistors per human being – brought these basic concepts into

devices that are much closer to us than we realize. The best

example of this is in automobiles.

In the nineties, there were cars with electronic injection in Brazil,

in which electromechanical systems were replaced by a grid of

sensors (temperature, speed, rotation, etc.), actuators (nozzles,

etc.) and, not least, the central control module, a real embedded

microcomputer (nothing to do with onboard computers). Today,

your car can have more than 100 million lines of code embedded.

By the way, have you ever thought of doing a software upgrade

on it? This is what happens in some of the recalls performed by

the automobile manufacturers.

This instrumentation went beyond automotive applications and

began to take place in urban daily life. Traditional weather stations

have become part of a grid of urban climate control. Data about

temperature, barometric pressure, humidity, wind speed and

direction are transmitted to the meteorological control centres by

telemetry via 3G, wi-fi, cable or radio. On roads with high traffic

volumes, surveillance cameras are no longer simply transmitting

images and are starting to become data sources for intelligent

digital surveillance systems that are able to identify patterns of events, raise alerts and then make decisions about traffic control. Other systems are equipped with facial recognition algorithms, able to accurately identify people listed in a database, and, through attached microphones, audio analysis technologies can identify gunshots and issue alerts to the police.

Ultimately, for those who watched or read Minority Report or

1984, we see fiction becoming reality. Away from screens and

books, it must be our goal to apply technology to the common

good, aiming for a better and smarter planet.

For further information: http://www.ibm.com/smarterplanet


SPECIAL IBM CENTENARY: ELEMENTARY, MY DEAR WATSON! - José Luis Spagnuolo

From time to time a shift happens in the IT industry, changing

the entire future and modifying the perspective of the past.

This occurred, for example, with the introduction of the IBM S/360, with the emergence of personal computing, and also with the dawn of the internet.

At the beginning of this year, we witnessed what may become a huge transformation in how computers participate in human life. A computer named Watson took part in the TV show "Jeopardy!", a game which tests general knowledge, against the two greatest champions in the history of the program. Watson was able to select topics, understand natural language and, in a matter of seconds, try to answer the questions accurately before its human opponents could.

Watson’s victory was overwhelming. What most caught the viewer’s

attention was the vastness of knowledge and the interpretation skill required to succeed in this type of game, because once the game starts, Watson cannot contact or access external information, and its programmers can neither touch it nor access it remotely.

As amazing as it might seem, this machine does not have any

special hardware or software. It is based on the IBM Power 7

technology, running on Linux, with common memory chips

and standard disks like IBM DS8000, that are used on a large

scale by several IBM customers around the world. What can be

highlighted is Watson's processing capacity. It has 90 Power 750 servers configured as a cluster, each with 32 POWER7 processing cores operating at 3.55 GHz, and a combined 16 terabytes of RAM. This results in an extraordinary processing capacity of 80 teraflops (trillions of floating-point operations per second), which is used for the understanding, searching, retrieving, sorting and

presentation of information.

One of Watson's greatest advantages is DeepQA, a

probabilistic system architecture invented by IBM, which massively

uses parallel analytical processing algorithms. More than one

hundred different techniques were used to analyze the natural

language, identify sources of information, generate hypotheses,

find and sort the evidence, and combine and prioritize responses.

The way these techniques were combined in DeepQA brought

clarity, precision and agility in arriving at the answers. Watson

represented a real quantum advance in the design, application

and development of Artificial Intelligence. After the positive

response received from the game, the practical applications in

society have already begun to rise.
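The toy sketch below, in Python, mimics only the shape of that pipeline, generating candidate answers, scoring each one against several evidence scorers in parallel and ranking them by combined confidence; the scorers and candidates are invented for illustration and have nothing to do with Watson's actual algorithms.

from concurrent.futures import ThreadPoolExecutor

# Invented evidence scorers standing in for the "more than one hundred
# techniques": each returns a confidence between 0 and 1 for a candidate.
def keyword_overlap(question, candidate):    return 0.6 if "element" in question.lower() else 0.2
def source_reliability(question, candidate): return 0.9 if candidate == "Watson" else 0.5
def answer_type_match(question, candidate):  return 0.7

SCORERS = [keyword_overlap, source_reliability, answer_type_match]

def best_answer(question, candidates):
    """Score every candidate with every scorer in parallel, then rank."""
    def confidence(candidate):
        with ThreadPoolExecutor() as pool:
            scores = list(pool.map(lambda score: score(question, candidate), SCORERS))
        return sum(scores) / len(scores)        # naive combination of evidence
    return max(candidates, key=confidence)

print(best_answer("Elementary, my dear...?", ["Watson", "Sherlock", "Moriarty"]))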

In the field of medicine, for example, Watson may bring about a revolution in diagnosis and in the prescription of treatments. Given the millions of possible disorders and symptoms, a doctor gets, on average, only 50% of diagnoses right and can recall only 20% of the best practices needed to ensure correct treatment. The health care industry believes that Watson could raise that accuracy to as much as 90% in diagnosis and therapy. It could also help improve patients' lives and reduce costs for hospitals and governments, which in turn would allow a larger part of the population to gain access to health care of better quality.

Another practical application is for Watson to act as a customer service center. Through its understanding of natural language and access to users' data, Watson could answer questions and trigger the actions needed to satisfy clients quickly and accurately.

We are just beginning to assess the transformations that

Watson could generate in our lives in the coming years. But one

conclusion has been certainly established: the IT industry will never

be the same.

For further information: http://www.ibm.com/ibm100/us/en/icons/watson/

http://www.ibm.com/innovation/us/watson/

http://ibm.com/systems/power/advantages/watson


MULTI-CORE REVOLUTION IMPACTS ON SOFTWARE DEVELOPMENT - Thadeu de Russo e Carmo

In recent years, due to physical limitations, processor clock speed is no longer increasing even as the quantity of transistors keeps growing. For that reason, among others,

the search for performance gains has

led to new approaches, including the

construction of multi-core processors.

These processors, popularly known as multi-core, are becoming more and more common in personal computers, not only in desktops and notebooks, but also in tablets and game consoles. For example, the Cell

processor developed jointly by IBM, Toshiba and Sony is present in the Sony PlayStation 3; in addition, the Microsoft Xbox 360 and Nintendo Wii consoles also use processors based on IBM Power multi-core technology.

Multi-core processors have a significant impact on the way

programs are written, due to the difficulty of increasing their

clock performance. If the programs are to take advantage of the

performance gain they must be written in such a way that they

can be concurrently distributed across the processor cores by

the operating system.

Writing programs to be executed concurrently is not an easy

task. We may think of them as consisting of several other smaller

programs, which will probably share information, which in turn requires working out how to synchronize read and write access

to that information. Moreover, with those actions occurring in

parallel, it is almost impossible to know for certain the order in

which they will be executed. Furthermore, most of the current

programming languages and systems development environments

are not well suited to developing concurrent systems,

which makes it even more difficult.

In languages such as Java, C++ and C#, access control to a shared memory region is done through locking primitives (such as semaphores). However, the use of these locks, besides being complicated, has limitations and may create deadlocks, preventing the system from continuing to execute.
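A classic illustration of the hazard, sketched here in Python rather than in any of the languages mentioned: two threads that take the same pair of locks in opposite order can block each other forever, and the usual remedy is to impose a single global acquisition order.

import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker_one():
    with lock_a:            # takes A first...
        with lock_b:        # ...then B
            pass

def worker_two_risky():
    with lock_b:            # takes B first, A second: if it interleaves with
        with lock_a:        # worker_one, each thread waits on the other forever
            pass

def worker_two_safe():
    with lock_a:            # same global order as worker_one: A before B,
        with lock_b:        # so a deadlock between the two cannot happen
            pass

# Running the safe pair concurrently never deadlocks.
threads = [threading.Thread(target=f) for f in (worker_one, worker_two_safe)]
for t in threads: t.start()
for t in threads: t.join()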

The functional programming paradigm, which for a long time

was considered too theoretical for commercial applications

development, has been gaining a greater interest in the market

in the last few years. This interest is due to functional languages

such as Erlang and Haskell, both of which have the appropriate

characteristics for the development of concurrent systems.

Unlike imperative languages, which favor data mutability, functional languages are based on function application and recursion. As an example, we can perform a loop without changing the value of any variable, including those that control the loop (see the sketch below). In addition, there are functional and

concurrent languages such as Scala and Clojure, which run

on JVMs (Java Virtual Machines), and naturally interact with

the Java platform.
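To make the loop-without-mutation point concrete, here is the same summation written twice in Python (used here only as neutral pseudocode for the idea): once imperatively, with a mutable accumulator and loop variable, and once in the functional style, with recursion and no reassignment.

# Imperative style: the accumulator and the loop variable are mutated.
def sum_imperative(values):
    total = 0
    for v in values:            # 'v' and 'total' change on every iteration
        total += v
    return total

# Functional style: nothing is ever reassigned; the "loop" is recursion.
def sum_functional(values):
    if not values:                       # base case: an empty list sums to 0
        return 0
    head, *tail = values                 # fresh bindings, no mutation
    return head + sum_functional(tail)

assert sum_imperative([1, 2, 3, 4]) == sum_functional([1, 2, 3, 4]) == 10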

We are experiencing a paradigm change in the same way as

happened with object orientation. The development of non-

sequential algorithms is becoming more and more common.

Abstractions for concurrent programming (such as actors and software transactional memory) are already present in many programming languages. Functional languages are increasingly being used by corporations, and the way system developers address problems is also changing.

For further information: http://www.gotw.ca/publications/concurrency-ddj.htm

http://www.erlang.org

http://haskell.org


SPECIAL IBM CENTENARY: IBM AND THE INTERNET - Leônidas Vieira Lisboa

Several events have contributed to the evolution of the Internet,

transforming it into the global network that today carries a variety

of media and services and has become an icon of the last 100

years, changing corporate businesses and people’s lives.

To better understand this icon, it is worth briefly looking at

some important points in its history and seeing how IBM was involved in them.

1. ARPANET (Advanced Research Project Agency Network-1969)

and NSFNET (National Science Foundation Network-1986)

were pioneering networks in the United States, that connected

computers over long distances, for military and scientific projects

respectively. The ARPANET introduced some important concepts

such as redundancy and packet transmission, whereas the

NSFNET, which subsequently absorbed the ARPANET network

nodes when that was disbanded, created the backbone that

gave rise to the Internet. IBM participated actively in the NSFNET

in conjunction with the MCI operator, the State of Michigan

and a consortium of American universities. Many innovative

technologies and new products were developed using the TCP/

IP Protocol, under a strong project management discipline. At

the beginning the NSFNET connected around 170 U.S. networks

and by 1995 already had 50 million users in 93 countries. At

that point, the network was commercially transitioned to the

telecommunications carriers.

2. When the IBM Personal Computer (IBM 5150) was announced

in 1981, it became the leading product in the transformation

that extended the frontiers of computer science to the general

public. The presence of the PC in homes, schools and businesses

made it the device that popularized the Internet in the following

decade. The IBM PC brought the concept of open architecture

to microcomputers by publishing its design and allowing

other companies to create software and peripherals that were

compatible with this platform. Most personal computers still follow

this open standard. For this reason, the IBM PC was a milestone

in the history of personal computers, the machines that brought

the use of Internet services, such as e-mail and the World Wide

Web to the masses.

3. In the mid-1990s the term “e-business” represented the

materialization of an IBM strategy to demonstrate how to gather

market services and technology in order to do business over

the Internet, using a “network-centric” vision, focused on the

Web. This has possibly been the most important contribution

from IBM in the evolution of the Internet, raising it to the status

of the global infrastructure needed for 21st century businesses.

It was the beginning of the era of electronic transactions via the

Internet, that are so common today in banks and virtual stores.

IBM has created a number of technologies that have helped

the Internet become established as an essential tool for the

information age. For example, the WebSphere software platform,

which allowed the integration of several systems to the Web or

the World Community Grid, which showed how the Internet can

be intelligently applied to large scale projects supporting global

social initiatives.

If it is true that reflecting over the past makes it possible to better

plan for the future, then reflection on IBM’s contributions to the

evolution of the Internet reminds us not only of the innovations

already introduced and their impacts, but also allows us to envision

a future of progress and the benefits that technology can bring

to humanity.

For further information: http://www.ibm.com/ibm100/us/en/icons/internetrise/

http://www.ibm.com/ibm100/us/en/icons/worldgrid/

http://www.ibm.com/ibm100/us/en/icons/ebusiness/


GOVERNANCE, RISK AND COMPLIANCE - Hugo Leonardo Sousa Farias

Over the years, Information Technology (IT) has become the

backbone for many companies’ business, becoming a competitive

differential, rather than an option. However, this dependence

requires much more attention: it is necessary to ensure that IT investments generate value for the business, to guarantee that IT processes are efficient, to keep operations available, and also to adhere to contractual commitments, regulatory mechanisms and legislation.

To address those challenges, companies increasingly turn to models and frameworks for Governance, Risk Management and Compliance, or simply GRC. For many years those disciplines were treated individually; each viewed only from its own perspective, they rarely combined efforts, resources, processes and systems to achieve common goals. Fortunately, that is changing, since treating GRC from an integrated perspective has been attracting the attention of many companies.

According to the ITGI (IT Governance Institute), Governance is the

set of responsibilities and practices exercised by executives and

by the high management of the company with the objective of

providing strategic direction, ensuring that the company objectives

are achieved, and that the resources are used in a responsible way.

There are international standards and good practice guides of

IT governance that can be used as reference, such as COBIT

(Control Objectives for Information and Related Technology),

a framework for IT best practices; ITIL (IT Infrastructure Library),

a set of best practices for managing IT services; ISO/IEC

27001, a standard for information security management systems, among others.

The Risk Management definition in the Risk IT framework determines that this activity should involve all business units in the organization, providing a comprehensive view of all IT-related risks. An Enterprise Risk Management structure provides greater alignment with the business, more efficient IT processes, and greater operational availability, with a consequent reduction of incidents. All of these contribute to business value. In service

provider companies, risk management may represent new

business opportunities.

Finally, Compliance is the act of adhering to, and demonstrating adherence to, laws and external regulations, as well as to corporate policies and procedures. Internal controls should be implemented to ensure operational efficiency and the reliability of financial reports. Non-compliance can increase a company's costs, generate financial impact, and damage the company's image.

An Advanced Market Research survey of companies in the United States estimated GRC investments at US$ 29.8 billion in 2010, an increase of 3.9% over the previous year.

IT Risk Management and Compliance should not be treated

as isolated disciplines, for the centralized management of

these activities is an irreversible trend. Furthermore, GRC is an

integrating part of corporate management and it provides strategic

alignment with the business and value delivery. It also provides

better resource management and IT performance.

With the growing demand in the market (external and internal)

for transparency and responsibility, the improvements in GRC

represent a competitive advantage that can provide growth

and new markets for companies. It is the convergence of three knowledge areas that makes the difference: in this case, 1 + 1 + 1 adds up to much more than 3.

For further information: http://pt.wikipedia.org/wiki/GRC

http://www.isaca.org/Knowledge-Center/


SPECIAL IBM CENTENARY: IBM TAPE: BREAKING BARRIERS IN DATA STORAGE - João Marcos Leite

It is a well-known fact that the volume of digital data generated in the

world grows exponentially. The number of sources of information

is increasing, because there are “net” connected computers

and devices in almost every home, school and business.

If we also consider the smartphones, tablets, game consoles

and other electronic devices that exist in our everyday lives, the

list of potential data generators becomes quite extensive.

That data, when business-related, is fundamental to the survival

of businesses regardless of their size. One could ask: where

can we store so much information? And if that data, for some

reason is lost, how can it be quickly retrieved, with minimal

business impact?

For over half a century, an IBM invention has answered those questions:

magnetic tape drives. They have played an essential role in

enterprise data protection, especially those that need to be

retained for long periods of time, at a lower cost than the magnetic

disk storage.

The first commercial model announced by IBM in 1952, the 726

Magnetic Tape Recorder, marked the transition from the data

storage on punched cards to a magnetic medium. At the beginning, the biggest challenge was to convince users, who until then could visually inspect the records through the perforations of the cards, to accept a new physical medium on which the data could not be seen with the naked eye. Only after users got used to this new paradigm of digital storage could the development of other magnetic devices, such as the IBM RAMAC disks, and everything else that came since then, begin, strongly boosting the development of Information Technology.

Various technologies created by IBM for the tape drives were

subsequently adopted in magnetic disks, such as thin-film heads,

the intermediate cache memory used to enhance data transfer

performance with the servers, and the adoption of microcode

inside the device. So, the features primarily developed for the tape

drives also helped significantly with the technological evolution

of disk subsystems.

The influence that data storage on magnetic tape had on the computer world goes beyond that: it created the concept of

tiered storage, with online and offline data at variable cost; it led

to the birth of the most important data management application:

backup/restore; it assisted in data portability for remote protection

and integration between companies; and it allowed long-term data archiving in compliance with information retention regulations.

In these almost sixty years in which the capacity jumped from just

2 MB per tape reel (in IBM model 726) to 4 TB per cartridge in the

newest model (IBM TS1140), and the data transfer rate went from

12.5 kB/s to 800 MB/s (not considering compression), there were

many achievements by the engineers who participated in the development of this technology. This was done with innovative and revolutionary ideas, which gave IT the ability to process and protect ever more information, an intangible asset of great value to companies.

Tape drives have evolved in various ways. This technology still

remains the data storage platform with the best cost/benefit ratio, flexible and scalable enough to meet the most demanding business applications, breaking barriers with each new generation. This technology has persisted for almost sixty years, and it promises even more for at least the next forty years.

For further information: www.ibm.com/systems/storage/tape/index.html

www.ibm.com/ibm/history/exhibits/storage/storage_fifty.html


THE NEW MILLENNIUM BUG? - Sergio Varga

The new version of the Internet Protocol (IP), replacing IPv4, is IPv6. This change will allow approximately 3.4 x 10^38 addresses, instead of the roughly 4 billion addresses supported today. As described by Luis Espinola in the first book of Mini Papers, IPv4 exhaustion eventually came even before 2012, because in February 2011 the last free blocks of IPv4 addresses were allocated by the Internet Assigned Numbers Authority (IANA).
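The two address-space figures quoted above follow directly from the width of the address fields, as the quick Python check below shows (the 7 billion people and 50 devices per person used later in the text are equally easy to test):

ipv4_addresses = 2 ** 32       # 32-bit addresses: about 4.3 billion
ipv6_addresses = 2 ** 128      # 128-bit addresses: about 3.4 x 10^38

print(f"IPv4: {ipv4_addresses:,}")        # 4,294,967,296
print(f"IPv6: {ipv6_addresses:.3e}")      # ~3.403e+38

# Even 7 billion people with 50 connected devices each barely scratch it.
devices = 7_000_000_000 * 50
print(f"IPv6 addresses available per device: {ipv6_addresses // devices:.3e}")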

This means that any institution requiring a new official IP address will only be able to obtain it from one of the five regional registries that may still hold such addresses. Once those reserves are gone, companies will need to seek alternatives such as outsourcing, co-location (hosting computers in another company's facility), and so on.

Some believe that this will be the new Millennium

Bug. IT professionals who were in the marketplace

before 2000 (Y2K), should remember the frisson that

occurred in the last years before 2000, particularly in

late 1999. In most systems, the year was encoded

with two digits and this could cause huge problems

for those who used dates to perform calculations.

For example, subtracting 99 from 00 was obviously

different from subtracting 1999 from 2000. So, it

was necessary to increase the field “year” to four

digits, which caused a frantic rush to change legacy

systems at the end of the previous century. In the end, there

was no news of any huge problems occurring after that awaited

New Year’s Eve.

But what is happening today? Almost all Internet users are using

IPv4 without the possibility of growth on their current address

space. Therefore, it is imperative to actually start the migration

to IPv6. According to a survey by Arbor Networks, the volume of IPv6 traffic in 2008 was on the order of 0.0026% of total traffic, and in the following year it remained at about the same rate. There are still thousands of applications that use IPv4, even though vendors are already delivering products compatible with IPv6. But what about the programs, applications, systems and websites that still do not support IPv6? We can see a huge opportunity for services, hardware and software sales, consulting, development and training to support companies that will need to adapt their applications to the new protocol. We

shouldn't forget the potential this conversion will unlock, since every device that supports the IP protocol, such as mobile phones, televisions, computers, electronics, gadgets and whatever else you can imagine, will have to use the new protocol. This opens up an unimaginable range of opportunities.

An interim solution would be to bridge existing applications to IPv6 through alternative means, using resources such as proxies, gateways and NAT (Network Address Translation) to map private addresses to official ones; however, this would imply a possible loss of application performance caused by the additional traffic hops.

The IPv4 protocol has lasted about 30 years, and

at this moment, we cannot even foresee IPv6

reaching a ceiling, as in this case, even if each of

the 7 billion inhabitants on the planet had 50 devices

with Internet access, there would still be addresses

available. But, at the pace of the technological

advance, it would not be a surprise if within 80

years for example, the addresses were again exhausted.

Unlike the Y2K bug, the adoption of the IPv6 protocol is less critical, as there is sufficient time for migration. It is likely

that entertainment and marketing industries will drive this change

because they are the ones that need to reach a large number

of consumers, and the IPv6 can be the solution to streamline

this process.

For further information: http://inetcore.com/project/ipv4ec/index_en.html

http://validador.ipv6.br/index.php?site=www.ipv6.br&lang=pt


MAINTENANCE OF SYSTEMS AT THE SPEED OF BUSINESS - Luiz Phelipe A. de Souza

Imagine the following scenario: Christmas is coming, the period of highest revenue for a large retail company, and sales expectations are high. By analyzing its market, this company realizes that competitors' actions have started to impact its results.

Then, following the trend, the “numbers” expected by the end

of the year may be compromised. The strategy needs to be

reviewed. Business rules need to be modified to try to reverse

the picture. IT Staff need to be involved. The systems that support

the operation of the company need to consider new rules, and

the need to change source code. Overwhelmed by several

other demands, the IT staff provide deadlines that do not meet

the needs of users.

A similar scenario can be identified in most organizations that depend on IT systems today. Such deadlocks, and the question of how IT systems can be flexible enough to give the business the agility it needs, can be addressed by separating the logic and rules that underlie the business (which normally require substantial maintenance driven by end users) from the rest of the system's functionality. The rules then live in a component that can be implemented and maintained through easy-to-use mechanisms, including by people not directly involved in the development of the other components of the application.

Currently, the IT market, customers and suppliers alike, has adopted this approach in business rules management tools (BRMS, Business Rules Management Systems). Generally speaking, the idea behind this type of tool is to provide a controlled repository where all the business rules can be created, maintained and read by the people involved in defining them (and not necessarily by IT professionals versed in programming languages). Moreover, the whole set of published rules can be consumed at any moment by legacy systems, regardless of the technology in which those systems were originally developed.

Obviously, some requirements are essential for the correct operation of a business rules system. The first, and most important, concerns the way those rules are written. So that "non-technical" users can write rules that IT systems can interpret, rule management tools provide mechanisms and features for creating a vocabulary in which business rules are written.

Writing a rule, with a vocabulary built from the organization's own jargon, should be as simple and natural as: "if the driver's age is less than 20 years then consider the driver an inexperienced driver".
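The fragment below conveys, in Python, the core idea behind such tools: the rule above becomes a small data structure kept outside the application code, which a generic evaluator applies, so the threshold can change without touching the application (the vocabulary and evaluator are invented for illustration and do not reflect any specific BRMS product).

# A business rule expressed as data, not as code buried in the application.
RULES = [
    {"if": {"fact": "driver_age", "op": "<", "value": 20},
     "then": {"set": "driver_profile", "to": "inexperienced"}},
]

OPERATORS = {"<": lambda a, b: a < b, ">=": lambda a, b: a >= b}

def apply_rules(facts, rules=RULES):
    """Evaluate each rule against the facts and apply its conclusion."""
    result = dict(facts)
    for rule in rules:
        condition = rule["if"]
        if OPERATORS[condition["op"]](result[condition["fact"]], condition["value"]):
            result[rule["then"]["set"]] = rule["then"]["to"]
    return result

print(apply_rules({"driver_age": 19}))   # gains driver_profile = 'inexperienced'
print(apply_rules({"driver_age": 35}))   # rule does not fire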

Another fundamental point raised by specialists in this type of technology is governance and access control over the artifacts created in the tool. For an adoption that poses low risk to the company's business, access should be granted (or denied) for the publication step, the moment at which rule changes take effect and start being consumed by legacy systems. This type of functionality allows a safer adoption by avoiding rule changes with immediate impact on the organization's systems.

Implementing a business rules tool in complex systems is not one of the simplest tasks. Extracting business logic from old systems, from regulations and standards, or even from users' heads requires significant analysis and attention. The benefits, however, can be large: gains can be measured by how quickly the business responds to urgent demands, or by how much the IT maintenance backlog is reduced.

Which application developer has never received a change order with a comment like: "It should be quick. Just add an IF..."?

For further information: http://en.wikipedia.org/wiki/Business_rules

http://www.businessrulesgroup.org

http://www.brcommunity.com


SCALABILITY AND MANAGEMENT IN CLOUD COMPUTING - Edivaldo de Araujo Filho

In the Cloud Computing model, computational resources are

physically and virtually distributed on several locations, becoming

transparent to users where data is stored and applications

processed. The growing usage of this model is modifying the

current business scenario and challenging experts and IT architects

in the construction of this new reality, while searching for lower

costs, performance improvement, and an increase on security

and scalability of information systems.

The concept of Cloud has featured for some time on the market,

and it already is a reality for many companies, mainly small and

medium-sized enterprises. They have

already migrated all or part of their IT

infrastructure to the cloud, acquiring

technological solutions to support

their business as a service. In large

corporations, the CIOs are also seeking

a virtualized infrastructure, investing,

most of the time, in private clouds

within their own IT environments.

As the Cloud offers high scalability, it became a viable solution to intelligently address the demand for automation required by the business, combined with an effective and optimized utilization of computational resources. The theme of scalability had already been raised by the Grid Computing paradigm, which was concerned with the smart usage of IT infrastructure, especially the capability to expand (surplus) and shrink (shortage) technological resources according to the demand of the systems in operation.

Within the Cloud Computing scenario, scalability brings a new concept of virtual, elastic growth in place of physical data centers. For customers, this new way of acquiring applications and data becomes more convenient, as volumes grow and shrink according to the situation. Scalability brings a series of gains to the IT infrastructure, especially related to cost and to the dynamic way of expanding and retracting the use of computational resources in line with customer needs, as precisely and transparently as possible.
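A bare-bones sketch of that elasticity, with invented thresholds and no real provisioning API behind it, could look like the Python fragment below: capacity follows demand up and down instead of being fixed to the size of a physical installation.

# Toy autoscaling rule: keep average utilization inside a target band by
# adding or releasing virtual servers. All thresholds are illustrative.
SCALE_UP_AT, SCALE_DOWN_AT = 0.80, 0.30
MIN_SERVERS, MAX_SERVERS = 2, 20

def rescale(servers: int, avg_utilization: float) -> int:
    if avg_utilization > SCALE_UP_AT and servers < MAX_SERVERS:
        return servers + 1      # demand spike: provision one more instance
    if avg_utilization < SCALE_DOWN_AT and servers > MIN_SERVERS:
        return servers - 1      # idle capacity: release an instance
    return servers              # inside the band: leave it alone

servers = 4
for load in (0.85, 0.90, 0.70, 0.25, 0.20):   # simulated utilization samples
    servers = rescale(servers, load)
    print(f"utilization={load:.0%} -> {servers} servers")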

The traditional management of IT infrastructures has always relied on centralized, physical control of corporate computing installations. With the advent of the Cloud, IT is being redesigned to meet business requirements. Management of this new environment faces the challenge of not only keeping operational resources running, but also of redefining a monitoring model for a hybrid environment, part traditional IT and part virtualized in the cloud, whether public or private.

Managing IT with Cloud Computing requires a paradigm shift in which configuration items grow or shrink in an accelerated and diversified way. When using public clouds, besides the physical location of assets being unknown, the use of a virtualized and distributed model requires autonomic and decentralized management, focused on mission-critical applications with direct impact on the core business and on customer services.

The increasing demand not only for infrastructure, but also for

applications in the cloud, promotes investments in automation and

virtualization of IT environments, whether in the Cloud services

providers or in large corporations which are seeking private clouds.

The move to the Cloud is a way for IT to keep up with business growth, and with ever more complex data centers, while still pursuing equipment consolidation, space savings and, especially, reduced consumption of resources such as power and cooling.

For further information:www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf

www.ibm.com/cloud-computing/


THE EVOLUTION OF THE WEB IN BUSINESS MANAGEMENT - Márcio Valverde

As the reader of the TLC-BR Mini Papers may notice, there is a

clear evolution in Web technologies, strongly aimed at providing

a richer, smarter and more interactive experience to users.

In a growing and dynamic market like ours today, where users

require more speed and ease of interaction with Web applications,

it’s natural to see the emergence of new technologies to meet

these demands.

The Semantic Web, for example, seeks to organize the knowledge

stored in files and web pages. This concept comes from the

understanding of human language by computer systems in the

recovery of the information. Some technology companies already

offer semantic web features in their products to optimize the flow

of information and generate smarter search results, giving their clients more accuracy and agility in decision-making.

In another path of Web development, the quest for making available

data tailored to the needs of each user and the transformation of

data into knowledge brought several companies (such as Apple,

Google, IBM, Mozilla, etc.), together as a consortium to collaborate

in the construction of the fifth generation of the most popular

Internet language, HTML. Although designed to be compatible

with existing applications, HTML5 is a more dynamic language

and is able to offer a more structured and safer environment than

its previous versions.

HTML5 simplified scripting, which was complex and detailed

in earlier HTML versions, and also introduced a series of

comprehensive and interesting new features, such as:

1. The possibility of locating services and locations (such as shops, establishments, monuments, etc) close to the geographical position of the user using Geo-location;

2. The use of Speech Input, for making applications accessible by users with special needs;

3. Greater speed and traffic throughput for audio and video streams;

4. Inclusion of background threads, known as Web Workers, which allow multiple simultaneous activities to run on a web page, thus greatly reducing processing and response times.

In this evolving environment, many companies are already

beginning to rethink the way they build their Web applications

and how they will distribute these new services. The possibilities

range from the use of smart phones, tablets, interactive digital

TVs, social networks and even cloud computing. This will allow

companies to use the Web as a platform for business and improve

the relationship between consumers and suppliers, thus increasing

the potential of business opportunities on a global scale.

We’re not facing a revolution, but an evolution in the way we do

business, and we should be aware of this “Brave New World” which

is shaping up as a major component in building a smarter planet,

capable of connecting people and markets at a whole new level.

For further information: http://www.youtube.com/watch?v=DHya_zl4kXI

http://www.youtube.com/watch?v=ei_r-WSoqgo


FINANCIAL AGILITY IN IT - Rodrigo Giaffredo

Companies operating in the 21st century face, among others, the challenge of remaining innovative and modern at a time when

the spread of information at high speed and easy access to

technical content leads to the development of a new generation

of creative thinkers.

When it comes to modernization and innovation, technology is

a recurring subject. Although creative ideas do not always turn

into sophisticated technological components, it is a fact that

the most automated organizations, whether in their core activities

or in support ones, lead the race for markets.

Traditionally, spending on IT (information technology) is considered an expense. However, young and profitable companies broke this paradigm, considering IT spend an important investment for the creation of new markets, product creation, and maintenance of competitiveness. With this, the role of IT in corporate financial performance has been changing from that of a support function (just a cost center number and service provider) to a change agent for the financial success of the business.

To measure performance of IT in organizations one must understand

that isolated metrics do not tell the whole story. Evaluating the

results of horizontal variations (current period versus prior periods)

or vertical variations (IT spend vs total spending) is not sufficient

to assess the role of technology areas in corporate efficiency.

In the article “IT Key Metrics Data 2011” (Gartner, December

2010), the authors claim that it is necessary to “evaluate the

performance of IT in the context of the organization, in order to

properly communicate the value and significance of practice in

this area in the achievement of results." A similar opinion is expressed in the report "Global Banking Taxonomy 2010" (IDC Financial Insight, July 2010).

Starting from this premise, efficient organizations should assess

the performance of IT, supported on the tripod “IT as % of revenue,

expenditure and manpower”, thus understanding the level of

intensity of the participation of this area in business performance.

Let’s stick to the example “IT spending versus total revenue”,

discussed in the article above, and graphically represent the comparison as a matrix, positioning the intersection between these two pillars in quadrants with the following colors:

1. Yellow: total revenue and IT expenses move in the same direction. If the intersection falls in the upper right quadrant, IT spending varies less in % than revenue, "accelerating" profitability; if it falls in the lower left quadrant, the % reduction in IT spending must be greater than that of revenue, "slowing down" the loss of margins.

2. Green: revenues grow and IT expenses decrease. Apparently perfect; however, it is important to check whether the IT budget is simply being squeezed in the organization (the so-called "spending myopia", rather than real savings).

3. Red: a critical period in which revenue decreases and IT spending increases, indicating that it is time to review the area's budget, prioritizing creative investments with a better cost-benefit ratio.
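To make the color coding above concrete, the small Python function below classifies a period from the two growth rates, following the same reasoning; the wording of the labels is merely illustrative.

def classify_period(revenue_growth: float, it_spend_growth: float) -> str:
    """Classify a period from the % variation of revenue and of IT spending."""
    if revenue_growth > 0 and it_spend_growth < 0:
        return "green: revenue up, IT spend down (check it is not spending myopia)"
    if revenue_growth < 0 and it_spend_growth > 0:
        return "red: revenue down, IT spend up; review the IT budget"
    if it_spend_growth < revenue_growth:
        return "yellow, favorable: IT spend varying below revenue, helping margins"
    return "yellow, unfavorable: IT spend varying above revenue, squeezing margins"

print(classify_period(revenue_growth=5.0, it_spend_growth=3.0))    # yellow, favorable
print(classify_period(revenue_growth=-4.0, it_spend_growth=2.0))   # red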

This is one of the possibilities for using the multi-dimensional

financial analysis within IT; another example is the Balanced

Scorecard measurement methodology and organizational

performance management through the use of financial, commercial,

internal processes and learning/growth indicators. It is up to

CxO executives (including CIOs) to combine them in order to

generate predictive information about the market and ensure

longevity and agility in the most different contexts.

For further information: http://www.gartner.com/DisplayDocument?id=1495114

http://www.idc-fi.com/getdoc.jsp?containerId=IDC_P12104


IT COST MANAGEMENT - Anderson Pedrassa

Understanding and communicating information technology productivity relative to other business metrics is mandatory, according to Gartner. Treating the dynamics of IT investment only as a percentage of revenue, the most widely used metric, can hinder the understanding of important trends and does not, in fact, reflect IT's contribution to the results of a company's operation.

As a major component of the IT’s productivity equation, IT cost

management’s mission is to measure to manage; measure to

do more with less. Many managers know how much IT operation

costs (how much is paid) but, due to the lack of transparency of

costs, they see it as a black box that

generates significant and growing

expenses. Giving a visibility of these

costs can revolutionize the way

companies consume the resources

(internal and external) and increase

the focus on IT investments which,

in fact, contribute to the business

results of these companies.

To achieve this, an important step is

defining internal processes to identify

and measure the direct and indirect

cost generators. These expenses

and disbursements include staff costs, hardware, software, real estate, contracts, taxes, outsourcing, electricity, water, telephone, cooling, depreciation and amortization.

Some expenses can be directly associated with a system,

application or service. However, shared expenses must follow

a different criterion, normally proportionality of use, in which

systems or clients that consume more shared resources must pay

more. This apportionment raises the maturity of IT cost management and requires a new metric, called "Standard Cost", which defines values for units of IT resources or services, generating a price catalog that includes, for example, the cost of a processing minute, of a stored gigabyte and of a kilobyte transferred over the network.

Other values, such as the cost per database transaction, per timeout or deadlock, or even per serious programming error, can expose less efficient applications and heavy consumers of resources.
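A toy example of how such a price catalog turns measured consumption into a chargeback figure is sketched below in Python; the unit prices and usage numbers are invented.

# Hypothetical "Standard Cost" catalog: price per unit of each IT resource.
CATALOG = {
    "cpu_minute":       0.020,    # one minute of processing
    "gigabyte_stored":  0.050,    # one gigabyte stored for the month
    "gigabyte_network": 0.010,    # one gigabyte transferred over the network
    "db_transaction":   0.00005,  # one database transaction
}

def monthly_charge(usage: dict) -> float:
    """Multiply each measured consumption by its standard cost and add it up."""
    return sum(CATALOG[resource] * amount for resource, amount in usage.items())

billing_system = {"cpu_minute": 12_000, "gigabyte_stored": 800,
                  "gigabyte_network": 350, "db_transaction": 2_500_000}
print(f"Billing system chargeback: US$ {monthly_charge(billing_system):,.2f}")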

The Standard Cost provides a basis for comparison among areas, business units, geographies, departments and suppliers, with the setting and tracking of cost targets for each, and may also be used to support budget preparation.

In order to compute the Standard Cost it is necessary to collect

direct consumption of IT resources, such as operating systems,

database managers, Internet infrastructure, electronic mail

systems, network and print servers and any other system,

application or appliance. Consumption may report processing

time, memory usage, input/output operations (IOPS, Input/Output

Operations Per Second), storage, network traffic, database

operations, among others. In fact,

everything that is consumed can

be registered on file and measured

for purposes of calculating the

Standard Cost.

IT cost management produces data that, handled with the support of Business Intelligence (BI) tools, makes it possible to conduct simulations, make predictions, support Capacity Planning and increase operational efficiency. A greater understanding

of IT costs also raises the comparison between the cost of development and the total operating cost of an application or system, revealing that the former loses importance as the lifecycle of applications stretches to five or ten years, for example.

An effective IT Cost Management helps to show with numbers

the real contribution of information technology to the financial

results of a company. In a time when good drivers have discounts

on car insurance and people with healthy habits have discounts

in their health plan, it makes sense that more efficient systems

should be rewarded somehow.

For further information: http://www.gartner.com/technology/metrics/communicating-it-metrics.jsp

http://www.mckinseyquarterly.com/Unraveling_the_mystery_of_IT_costs_1651


FCOE, INTEGRATION OF LAN AND SAN NETWORKS - André Gustavo Lomônaco

About ten years ago, several articles compared the traditional

telephony systems with the then new telephony systems based

on Internet Protocol (IP). Factors such as low acquisition cost and

reliability were associated with the traditional systems. However,

other factors were attributed to IP telephony systems, such as

ROI (Return on Investment) and cost reduction; the latter to be

achieved through sharing infrastructure already used by the data

network, along with the unification of support staff with knowledge

of both technologies, thus eliminating distinct dedicated teams.

Currently, we are witnessing the convergence of two other

critical technologies - the Local Area Networks (LAN) that use

the Ethernet protocol for sending and receiving data and Storage

Area Networks (SAN) using Fibre Channel (FC) Protocol to convey

commands and data between servers and storage systems. This

integration, grounded in a new protocol dubbed Fibre Channel over Ethernet (FCoE), could bring to the Information Technology area impacts and benefits similar to those IP telephony has brought over the last ten years.

Although these distinct networks may be integrated with techniques

that utilize command and data packaging protocols, such as iSCSI,

FCIP, and iFCP, the level of integration and the benefits obtained

through the FCoE Protocol exceed the current integration methods.

This is achieved by sharing, over a single physical medium, both local network data traffic and the input and output operations of storage peripherals.

Currently a server that requires redundant network access

needs to be configured with two storage network connection

adapters (HBAs) and two additional adapters for the local area

network data, disregarding other connections to the equipment

management interfaces.

In this new consolidation scenario, enabled by FCoE, all LAN and

SAN traffic is routed through a new adapter dubbed Converged

Network Adapter (CNA). This way, advantages are obtained, such

as reduced number of adapters on each server, reduced total

consumption of electric energy, less physical space required

by the server, and reduction in the amount of network switches

and cabling needed. This new adapter includes the Ethernet

protocol that has been redesigned to encapsulate and transport

the FC protocol traffic, making it available for immediate use by

the current data storage equipment.

The traffic overhead required to encapsulate one protocol within

another, hovers around 2% of the total. Therefore, one may consider

that the overall performance, when comparing FC with FCoE, is

virtually the same. Although the current cost of a CNA adapter

still exceeds that of an HBA adapter, this difference is diminishing over

time, due to an increase in sales and usage of CNA adapters,

especially in new implementations.

Perhaps, a non-technical IT professional might still question

whether the migration to this new technology would be too time-

consuming and difficult. In fact, in addition to the replacement of

the technology itself, it will require considerable time and effort

to train professionals to acquire knowledge of both networks

(LAN and SAN). Nevertheless, the return on this investment

must be quick and rewarding, since the consolidation of these

networks will make it possible to meet security requirements in a more optimized way, and to improve the performance, scalability, and

availability of business applications.

For further information: http://www.redbooks.ibm.com/redpapers/pdfs/redp4493.pdf


POWER, A LOT OF PROCESSING POWER - Fernando de Moraes Sprocati

Since the introduction of the personal computer, what it is used for has grown and changed enormously. From the first personal computers, which provided basic word and data processing, to today's multitasking, multimedia systems, features and processing power have been steadily incorporated.

A huge step forward in processing power was the use of dedicated processors to handle video, a very demanding task as games look increasingly realistic.

Known as GPUs (Graphics Processing Units), these graphics processors have immense numeric processing capability. Currently a GPU can have many hundreds of cores, while CPUs (even the most modern ones) offer at most 16 cores that can be "duplicated" with the hyper-threading mechanism. Despite having simpler cores than today's CPUs, GPUs deliver far superior performance in numeric processing.

It was the opportunity to take advantage of this potential that led to the development of OpenCL (Open Computing Language), which aims to make it possible to run ordinary programs on graphics cards, the same cards used to run games. Created by Apple

and subsequently defined by a consortium of large companies

such as AMD, IBM, Intel and NVIDIA, OpenCL is gaining

increasing market acceptance.

To take advantage of the capabilities of graphics cards, applications must be rewritten to use the parallelism mechanism through which a program has its multiple streams spread among the processing cores. This effort is rewarded by performance gains of up to 100 times. One of the manufacturers has published cases

of high performance gains, above 2,500 times.
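OpenCL itself is written as C-like kernels, but the restructuring described above, splitting one computation into many independent streams, can be sketched in plain Python with a process pool standing in for the many cores; this illustrates only the data-parallel style, not the OpenCL API.

from multiprocessing import Pool

def heavy_kernel(x: float) -> float:
    """Stand-in for a numeric kernel applied independently to each element."""
    return sum((x + i) ** 0.5 for i in range(10_000))

if __name__ == "__main__":
    data = list(range(1_000))

    # Sequential version: a single core works through every element.
    sequential = [heavy_kernel(x) for x in data]

    # Data-parallel version: the same kernel mapped over all elements at once,
    # each worker taking a slice -- the shape that GPU kernels exploit.
    with Pool() as workers:
        parallel = workers.map(heavy_kernel, data)

    assert parallel == sequential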

The applications that benefit the most from this new approach

are those involving heavy numerical calculations such as those

in the oil industry, finance, fluid dynamics, signal processing,

seismological calculation, simulation, etc.

Any application can be executed on GPUs. Even database

management systems have already been ported to these

processors, thus obtaining great performance results. In fact, there are already applications that detect a compatible GPU and automatically start using it.

Using GPUs, it is possible to double an application's performance without investing in expensive hardware, because for a performance gain of that magnitude it is not necessary to use powerful GPUs.

Regardless of that, GPU manufacturers keep increasing

processing capacity of their products. In fact, today, it is possible

to put together a desktop with the power of 2 TeraFLOPS using

GPUs at an acceptable cost for home users. As a reference,

one of today's most advanced CPUs (Intel Core i7 980X EE)

hits “only” 100 GigaFLOPS, i.e. an average of twenty times

lower performance.

However, there are still bottlenecks in this technology, especially

in regard to the capacity of data transfer between the CPU

main memory and the memory of GPUs. This topic is being

addressed by manufacturers, which will further raise the potential of GPUs for general-purpose use.

The growing number of applications that use OpenCL can take us to a new level of performance simply by taking more intelligent advantage of the processing capability already installed in our computers.

For further information: http://www.khronos.org/opencl/

http://www.alphaworks.ibm.com/tech/opencl/


THE POWER OF SOCIAL TECHNOLOGY - Marcel Benayon

Everybody nowadays is born connected.

My age did not allow this, but I remember

when I joined the team.

It was in 1992, I was 12 years old, and

I got an analog modem from my father –

certainly many readers will not even know

this technology. He worked at IBM and was

a very dedicated person, always getting

home late and always postponing the

modem’s installation in our old computer.

And that went on and on, until one day, with

no know-how at all, but with some luck, I

got the tools, plugged in the device and

the sound of success echoed (the modem connection beep)!

Some days later I was the proud owner of my own BBS (Bulletin Board System), a message and file exchange hub.

first experience with connectivity, linking society to technology.

Since the user community was mostly young, the budget was low; the lines were cut off and the service suspended some years later.

Fifteen years later a tech question from a friend surprised me.

He had heard about Twitter, and he seriously doubted if it would

work. He thought that since there was no money involved directly,

there would be no success. It reminded me of my BBS with its

differences and links to a new reality.

There is no doubt today that social networking is a big milestone

in technology. Now I’m used to the Facebook calendar, LinkedIn

contacts and information from Twitter. More importantly, companies

are making money with this, reducing the distance from their

clients to just a click of the mouse. And the great advertising approach today is already "click-to-click", an upgrade from the old word-of-mouth approach.

To avoid sounding like spam, I'm going to bring up some different and not very well known examples of social network applications. Jones Soda Co., which sold corn and salmon-butter soft drinks (among 64 other flavors), became famous by launching a campaign on Facebook and selling over a million bottles customized with fans' photos sent through the Internet.

The leaders of the Service Day project, one

of the IBM Centenary initiatives where each

employee donated eight hours of their time

towards community activities, were trained to

explore social media and conduct activities

there, especially recruiting volunteers and

announcing results. As an example of the

impact of virtual actions during the event,

the IBM Rio de Janeiro community on Facebook reached over 500 members and is now a strong interaction channel.

If in the past it was difficult to see capital flowing along with the bits and bytes, the social use of technology today is a big hit on stock exchanges and a great channel for raising much-needed resources.

To fund research and development, LinkedIn has raised about

$350 million (more than its $243 million 2010 revenue and 23

times its 2010 profit).

The success of the virtual radio station Pandora in capturing an audience generated concerns, since the site had reported losses in the previous year and its business model was still questionable. Even so, investors expected the injection of capital to give the company a new direction. Facebook waited its turn in the queue, with initial estimates pointing to an offering of around $10 billion that would value the company at about $100 billion at the time. Will the market support this kind of operation, or will we face the creation of a "dot-com bubble 2.0"? In any event, I still keep my old 5 1/4" diskettes with my BBS files stored... Who knows?

For further information: http://www.bspcn.com/2011/03/04/20-examples-of-greatfacebook-pages/

Page 49: Transformation and Change

49

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

GIRLS AND TECHNOLOGY
Cíntia Bracelos

When I was a kid I loved going to the street market with my father. I had my own shopping cart and it was fun to do the math in front of the stall owners. They never believed my result was correct and, modesty aside, it always was. My father taught me this "trick", as well as how to think logically and rationally and to enjoy mathematics in a fun way. Today I have two daughters and my greatest concern

is to leave their options open so that they can learn whatever they

really like, whether it’s math, science or fine arts.

Unfortunately not all girls have the opportunity to develop and

appreciate the exact sciences. I believe the problem starts early

in their lives and cultural traditions are a strong influence. Girls

are constantly made to think they are

not good at math and that technology

is boring. Many say that girls are better

in social areas and in professions that

involve people and it is boys who are

good with numbers. On top of this early conditioning, boys grow up surrounded by examples of men who are engineers or computer professionals, whom they admire and believe to have great careers. Girls don't have many such examples to inspire them.

I graduated in electrical engineering and

have worked with technology for 19 years.

My daughters (7 and 9 years old) always ask me what I do at work.

I’ve been improving my response over the years. It is easier for

them to understand what a teacher, dentist or doctor does, as those are part of their everyday life. I started by explaining that an engineer invents, builds and fixes things, and solves problems. Almost everything around us has something in it that an engineer has done. After that I added an explanation about technology.

They were born in this world of cell phones, tablets, and netbooks

and are in love with these gadgets. I explained that in my work

I recommend or apply technologies for businesses and the

communities in which we live, so they can work better. I work

on projects in which the technology tends to make everything

simpler. Engineers and other technology professionals create

new things that help society. It is a cooler way to introduce the technology field to girls, without linking it to the usual nerd stereotype.

Working with technology involves creativity, problem solving, the

ability to work in a group, and curiosity. Enjoying studying is crucial

to staying current and being in demand in the labor market. Going

to college is very important and having a college degree makes

a difference as does having a professional certification. This

career can ensure, in addition to a good job, the chance to meet

and mingle with very talented people. Technology professionals

are modern and are always in the news. It has everything to do with girls: it is cool and modern.

Today there still isn't an easy path for women who decide to enter the areas of engineering and technology. Maybe that's why there are so few of them in technical careers in universities and enterprises. In my work, for example, there are countless meetings I take part in where I'm the only "techie" woman in the room. But if there is an initial bias, it is easily overcome by showing competence and knowledge. To change this general picture we must work with girls from an early age, showing them things clearly. Parents and teachers are fundamental in discovering talented girls in science and encouraging them to follow this real vocation. And what the job market needs today is a diverse workforce, because when men and women work together they are able to reach even better results. The job market is lacking in

good engineers and technology professionals. There is a great

opportunity for women to develop and grow in this promising area.

These days I don't go to the street market anymore, because my husband (who is an economist) does the grocery shopping better than I do. But I install the electronic equipment, I'm the household technical support and I study mathematics with my daughters. Children learn by example, and this is a sweet way of telling them that they can be good at anything they like and find engaging. And I still love doing math: whenever the check arrives at a restaurant, my friends ask me to work out how much each one has to pay.

For further information: http://anitaborg.org

http://women.acm.org

Page 50: Transformation and Change

50

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

ABOUT PROPHETS AND CRYSTAL BALLS
Avi Alkalay

Some people say that the ancient prophets were ordinary people

who uttered simple logical consequences based on deeper

observation of the facts of their present and past. Everything we see around us is the result of some action; it has a history and a reason for being. On the other hand, following the same scientific

reasoning, if something apparently “does not have an explanation”,

it is because the historical facts that have caused it have not

been investigated deeply enough.

Today, twenty years after the Internet changed society and business,

the world is highly computerized. In practice, this means that

thousands of computers constantly generate huge volumes of data, e.g. the item that passed through the supermarket checkout, the license plate captured by a traffic camera, the visited social network profile, or the record of a phone call. After being used for its original purpose, the information becomes outdated.

Historical data then takes on an even greater value. When aggregated in large quantities or arranged in charts, it may show performance, growth, decline and, above all, trends that are materializing, in a business world eternally searching for ways to predict the future.

Modern “Prophets” work more or less like this:

1. Identify various repositories of historical data spread across a company (or even beyond it) and integrate them so they can be accessed together. Two examples of data would be (a) all products sold in a store and (b) a customer register with more generic data such as SSN, address and monthly income. Often the data is stored in data warehouses or data marts; other times it is discarded after analysis;

2. Find and model relationships between these data sets; for example, relating the SSN of the customer who purchased certain products to the profile of that SSN in the general customer register (a minimal sketch follows this list);

3. Create graphical views that help them to infer and, eventually, “predict the future” and make better decisions in order to control it. Note that this factor, still fairly human dependent, is the most valuable in this process.
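A minimal sketch of steps 1 and 2 in Python with the pandas library, assuming two hypothetical files, sales.csv (ssn, product, amount) and customers.csv (ssn, neighborhood, monthly_income); the file and column names are illustrative only.

  import pandas as pd

  sales = pd.read_csv("sales.csv")
  customers = pd.read_csv("customers.csv")

  # Step 2: relate the two data sets through the customer identifier.
  joined = sales.merge(customers, on="ssn")

  # Aggregate to reveal a purchase profile per neighborhood and income band.
  joined["income_band"] = pd.cut(joined["monthly_income"],
                                 bins=[0, 2000, 5000, 20000])
  profile = joined.groupby(["neighborhood", "income_band"])["amount"].mean()
  print(profile)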

In this example, one might try to predict the typical purchases of the residents of a certain neighborhood, of a certain income range or with a certain number of dependents, based on the history of a population. This analysis would be useful to determine which products, and in what quantity, to supply to a specific store, or to improve the returns from targeted marketing campaigns.

Another important forecast is how much one will need to open the floodgates of a hydroelectric power plant in order to generate enough power to meet the demand right after the final episode of a popular show, the time at which entire cities take a shower or start ironing. This sounds obvious, but it is a historical pattern that, when left untreated, can cause a blackout across an entire state. This example is real and shows the intrinsic relationship between disparate facts which suggest nothing intuitive when viewed in isolation.

Predicting or controlling the future has been institutionalized as a formal science in the disciplines of:

Business Intelligence, which aims to observe quantitative indicators in order to understand the past and the present;

Business Analytics, which seeks to help us ask the right questions through correlations between data.

The systems and methods of these disciplines enhance the practitioners' multi-disciplinary knowledge (for example, between dam design and soap opera plots) and intuition to predict the future.

The last word in prophecies is systems that receive data and facts as they occur and make real-time decisions of adjustment and performance improvement, e.g. granting or withdrawing financial credit, commanding operations on the stock exchange or distributing load in a telephone network.

The ability to predict or control the future will always be a difficult

and therefore highly valued task. Systems and business analysis

techniques are modern crystal balls that turn that art into something

tangible and scientific.

For further information: http://en.wikipedia.org/wiki/Data_mining

http://theregister.co.uk/2006/08/15/beer_diapers/

Page 51: Transformation and Change

51

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

SMART CITIES: THE WORK MOVES SO THAT LIFE GOES ON
Flávio Marim

One day, a certain sequence of three images jumped from a poster on the wall of the planning office of the city of Münster, Germany, and took over the web as an argument in favor of reducing the number of cars on the streets. The photos compare the space occupied by the same number of people when they use bicycles, cars and buses. In 2001, when the image was created, the still-young web did not inspire many better ideas than collective transportation and clean vehicles. Today, with mature solutions for remote work and with urban chaos making us cry out for smarter cities, working without leaving home shows us something that seemed lost: cities can still belong to people.

Any inhabitant of a large center knows the value of avoiding peak times. Nobody enjoys being part of the real army that moves daily, spending a lot of time and patience as it emits tons of poisonous gases into the atmosphere.

If public transportation does not meet the demand of the city and non-motorized vehicles are too fragile in the race for space, remote working can be the alternative that helps reduce haste, mental exhaustion and pollution in the urban streets.

Studies show that Brazilian workers spend, on average, an hour and a half per day moving between their homes and workplaces, half of them using automobiles and motorcycles. This means tons of CO2 would not be produced if more Brazilians worked from home, and that would be just the tip of the iceberg: people would also have more time available for themselves, which translates into a better, richer life.

In the current situation, we observe a huge breakdown in the basic protocols of urban conviviality. Respecting the crosswalk, making room for cyclists and staying patient at all times are challenges almost unattainable for commuters who have already spent their daily stock of tolerance long before they even manage to get close to their place of work.

To add to the existing problems, technology and connectivity are beginning to be used in misguided ways: cell phones, smartphones, tablets and even laptops dangerously divide attention behind the wheel and increasingly attract the watchful eyes of criminals.

The connectivity we already have, if used with discipline, offers us a new way to be productive and focus on the greater good. Companies like IBM, Xerox and American Airlines, for example, realized years ago that most of their employees can be as productive in a home office as in conventional structures, or even more so. The taboo of lost productivity when out of sight of management has proved to be just that: a taboo. In many cases the adaptation to remote work is not easy, as common family conflicts sometimes arise and often the professional cannot guarantee a proper environment outside the company. This indicates that it may be time to apply at home the same ability we use to adapt to a new work environment. Living connected cannot mean an increase in tension. Rather, it should allow us to produce more calmly, giving the cities a break and letting them breathe without the weight of our now unnecessary back-and-forth routine.

People have in their hands a great chance to break a chain

reaction that has transformed conviviality into dispute. Using

the technologies of remote working to change this situation will create a great opportunity for the emergence of true smart cities: urban centers that are less polluted, less congested, with better quality of life and populated by smart attitudes.

For further information: http://super.abril.com.br/cotidiano/se-todo-mundo-trabalhasse-casa-667585.shtml

Page 52: Transformation and Change

52

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

SPECIAL TECHNOLOGY FOR SOCIAL INCLUSION

Ilda Yaguinuma

The United Nations (UN) estimates the number of people with special needs in the world at

600 million. In 1998, December 3rd

was chosen as the date to celebrate

the International Day of the Disabled.

This date was especially honored

in 2006 as the “eAccessibility”

Day, i.e. accessibility to information

technologies.

The Brazilian Institute of Geography and Statistics (IBGE), in its 2000 census, estimated that 14.5% of the population is disabled. Disabled people are those with some difficulty seeing, hearing or moving, or with any other physical or mental special need.

Technology, for physical mobility as much as for intellectual capability, is advancing towards integrating people with special needs into various segments of the productive market.

In the area of visual impairment, developers search for alternatives to adapt applications for people with disabilities. There are several examples: applications that read the pages open on the screen and transmit the information as audio; the Snail Braille Reader, which converts Braille text messages to audio; reading through vibration, offered by the Nokia Braille Reader; cell phones that can make calls activated by movements; a mobile application that recognizes objects when they are brought close to the device; a voice recording device that makes pre-programmed calls; and a bracelet that guides the visually impaired using a GPS and a Bluetooth connection.

Concerning hearing impairment, LIBRAS (Brazilian Sign Language) is the sign language most used for communication in Brazil. Like the various spoken languages, it consists of linguistic levels such as phonology, morphology, syntax and semantics. Just as there are words in oral-aural languages, in sign languages there are also lexical items, called signs. The only difference is their visuo-spatial mode.

In terms of technological advances for LIBRAS, there is software that translates words in Portuguese: it captures the voice through a microphone and displays the interpretation on a monitor, as animated signs, in real time. This software offers a chat interface with presentation in written Portuguese as well as in sign language, and it also translates text into LIBRAS.

Today, there are sites to help with finding placement for people

with special needs in the labor market. The big IT companies

participate in these sites in order to be in compliance with the

"Lei de Cotas" (article 93 of Federal Law 8.213/91), which requires that 2% to 5% of a company's positions be reserved for people with disabilities.

Several companies in Brazil cooperate with organizations that

operate in the area, such as Avape, IOS, Impacta and Instituto

Eldorado, through educational and recruiting activities and through

incentive programs, believing in the development of the diversity

of the work force for the future.

Studies show that promoting this diversity brings benefits to

the companies. People with different backgrounds provide a

holistic view and promote creativity and innovation. What must be evaluated permanently is the inclusion itself: the recruiting sources, the selection and training methods, and the awareness and integration of the disabled within the professional community.

Technology can open doors and break down barriers for people

with special needs, integrating them into society and making them

part of the productive chain, with the speed and the dynamics

required by the market.

For further information: http://www.deficienteonline.com.br

http://www.oficinadofuturopcd.com.br

http://betalabs.nokia.com/apps/nokia-braille-reader

Page 53: Transformation and Change

53

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

AGILE: ARE YOU READY?
Luiz Esmiralha

In the beginning there was chaos. This could be the opening

sentence of a book to tell the story of the information technology

industry. In its early days, systems development was a small-scale, risky, non-standardized, and expensive activity. A long period of tests and fixes at the end of the project could indicate that the quality of the final system was lower than one would expect from a reliable product.

Around the late 1970s, several methodologies were created

directly derived from engineering, describing a project life cycle

as sequential phases, today known as waterfall. This method

sets finish-to-start phases: in order to start a phase, the previous one must be finished, and each phase is directly linked to a specific set of activities, resembling a factory production line. Although some teams have obtained success

with the usage of such methodologies, about 24% of all those

projects are cancelled or discarded after deployment, as described

in the Chaos Report (2009), published by the Standish Group.

The idea of a software factory evokes predictability and the reduction of costs and risks. However, software has several intrinsic

features that make its development essentially different from serial

production established by the traditional model of Henry Ford.

A factory produces the same type of product, repeatedly, thus

reducing the unit cost of production. Most of the repetitive activities

can be automated but developing software is an intellectual effort

closer to the design of new products.

Mutability is another essential feature of software. Unlike buildings, cars, and other objects in the material world, a software system is relatively easy to modify and adapt to new situations. Corporate systems generally have long life cycles, so it is vital that this mutability is well harnessed, allowing them to constantly readapt as the business evolves.

Agile methodologies are a response to the need for controlled

and reliable processes, but more aligned to the peculiar nature

of software. Instead of the thorough planning and strict change control of other methodologies, Agile sees change as an opportunity. Although there are different flavors of Agile (Extreme Programming, Scrum, Crystal, FDD, Agile UP, among others), the Agile Manifesto summarizes the values and principles common to all of them. Agile emphasizes that collaboration with the customer is a critical success factor, that progress is measured through the delivery of useful software, and that responding to change is worth more than strictly following a plan.

Several techniques are used to allow controlled adaptability. They include a lifecycle partitioned into fixed iterations of one to four weeks, smaller teams empowered to make decisions, contracts with negotiable scope, customer engagement throughout the project, test-driven development, and extensive use of unit tests (as in the sketch below).
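As a small illustration of the test-first practice mentioned above, the sketch below uses Python's unittest module; the discount rule is a hypothetical business requirement invented for the example, not part of any specific Agile flavor.

  import unittest

  def apply_discount(price, percent):
      """Hypothetical business rule, written to make the tests below pass."""
      if not 0 <= percent <= 100:
          raise ValueError("percent must be between 0 and 100")
      return round(price * (1 - percent / 100), 2)

  class ApplyDiscountTest(unittest.TestCase):
      def test_ten_percent_discount(self):
          self.assertEqual(apply_discount(200.0, 10), 180.0)

      def test_invalid_percent_is_rejected(self):
          with self.assertRaises(ValueError):
              apply_discount(100.0, 150)

  if __name__ == "__main__":
      unittest.main()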

Agile teams are self-managing, i.e. they receive goals and

decide how to meet requirements within the constraints of the

company. Several techniques can be used to keep track of project

progress. One example is daily stand-up meetings of fifteen

minutes, where participants stand up and report the status of their

work and any difficulties they are facing. Another example is the

usage of kanban boards and burn-down charts to communicate

project status to all participants.

While Agile is not a solution for all kinds of projects, its principles and practices can be a powerful tool for project managers who are not trapped in traditional models, and it provides technical teams with an agile and effective way of developing systems.

For further information: http://www.agilealliance.org/

http://agilemanifesto.org/

http://en.wikipedia.org/wiki/Agile_management

Page 54: Transformation and Change

54

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

THE THEORY OF MULTIPLE INTELLIGENCES AND JOBS IN IT
Elton Grottoli de Lima

At the beginning of the 1980s, Howard Gardner, an eminent professor at Harvard University, proposed an extension of the traditional concept of intelligence that would completely redefine the academic perception of human intelligence.

Traditionally, the cognitive capacity of a person is evaluated by its logical and mathematical aspects. This is the capacity reflected in intelligence quotient (IQ) tests, a measure that represents the

ability to deal with patterns, numbers and shapes. Gardner realized

that this way of measuring the cognitive capacity of an individual

was limited, since other aspects as important as the logical and

mathematical were left aside. For example, speech, physical

ability and written communication skills are not reflected in

the assessment of the traditional model. These observations led

Gardner to conceive his Theory of Multiple Intelligences, proposing

that the learning ability of one person should be evaluated within

a spectrum of basic skills. His research identified seven basic

human skills, each one expressed through a kind of intelligence:

the linguistic, the logical-mathematical, the spatial, the bodily-

kinesthetic, the musical, the interpersonal, and the intrapersonal.

In the technology area, various professions attest to the applicability

of the spectrum of intelligences proposed by Gardner. The

intelligence most commonly associated with this area is the

logical-mathematical intelligence, which gives the individual

the ability to reason logically, to deal with quantities, shapes

and patterns. This form of intelligence is used by programmers

to build algorithms, deal with abstractions and variables, and

also demonstrated by consulting business professionals when

they recognize patterns and apply the systemic thinking aimed

at solving business problems.

The professionals who specialize in software development for

electronic games can also demonstrate two other skills tightly

related to their activities. The first is spatial intelligence, related to

the ability to perceive the visual world accurately, make changes

and transformations on initial perceptions and recreate aspects

of visual experience. (This intelligence is especially applied in

the use of simulators and computational models that virtually

recreate the physical world.) The second is musical intelligence,

recognized as the earliest talent manifested in human development,

through the ability to perceive and manipulate tones, timbres,

rhythms and musical themes.

Another aspect relevant to software development is the fact that

the written language is the most common form of interaction

between systems and their users. As a result, system and

interface architects find linguistic intelligence indispensable.

This intelligence relates to the individual’s ability to cope with the

written and spoken language. As the interaction between users

and systems becomes more sophisticated, interfaces operated

by gestures and body movements are becoming more popular.

The creation of appropriate hardware and software to this new

paradigm requires developers and architects to understand

motor skills, leveraging the manifestation of bodily-kinesthetic

intelligence, characterized by the domain of body movements

and objects manipulation.

Besides technical occupations, there are also sales related

professions, which have various levels of relationships with

customers, whose success depends largely on dealing with

people and well-managed relationships. These are inherent

characteristics of interpersonal intelligence, demonstrated by the

ability to maintain good relations with others through understanding

their moods, motivations, and desires.

Last but not least, there is the intrapersonal intelligence, which gives

the individual knowledge of himself, recognizing his aspirations,

ideas, and feelings. It is demonstrated by high motivation and

a confident attitude. This characteristic is often associated with

professional success and is mainly expressed in great leaders.

This group of elementary intelligences has evolved and expanded

since its conception, by both Gardner and other scholars, without

losing its position as a basic set of human skills. Understanding

how different intelligences manifest in IT professions allows us

to expand our view of these professionals beyond their traditional stereotypes.

For further information: http://revistaescola.abril.com.br/historia/pratica-pedagogica/cientista-inteligencias-multiplas-423312.shtml

http://www.youtube.com/watch?v=l2QtSbP4FRg

Page 55: Transformation and Change

55

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

ANALYTICS AT YOUR FINGERTIPS
Paulo Henrique S. Teixeira

In general, the term “Business Intelligence”, or simply BI, is

commonly associated with an infrastructure which is capable

of processing and generating reports from business information,

which in turn is collected from different sources and consolidated

into a large database.

The BI concept is not new. In 1958, the researcher Hans Peter Luhn

defined “Business Intelligence Systems” in an article published in

the IBM Journal of Research and Development as an automated

system used to disseminate information into the different sectors

from any industrial, scientific or governmental organization.

In the current highly competitive environment, the efficient use

of information collected from various sources and stored in BI systems has become a key differentiator, or even a matter of survival, for organizations. This has led to the evolution of the

concept of Business Analytics.

In order to make business decisions with greater speed and precision, information needs to be available at any time. Besides, such decisions are no longer restricted to the physical workplace and can be made in many different places: with the increased mobility of the workforce and flexible work options, they may be made at a client's office, at airports, in the street or at home.

The emergence of high-speed network connectivity increased access to analytics environments, supplying part of these requirements. However, it was the emergence of smartphones and tablets that fully opened up mobility opportunities for users, fueling the start of mobile analytics. According to Gartner's estimate, mobile devices accounted for 33% of accesses to Business Intelligence and Business Analytics systems in 2013.

The strongest candidates to enjoy these benefits are executives, managers, the sales force and even field support staff working with customers. They will have access to the following types of functionality:

• Access to business information, anytime, anywhere, to

support decision-making;

• The use of multitouch screens, which allows new forms of interaction for the end user. Specific gestures on the screen add new query functionality to reports, with less need for user training;

• Real-time alerts on mobile devices, such as a stock level falling below the minimum limit, allowing faster actions and decisions that reduce the impact on production lines (see the sketch after this list);

• Geolocation, made easier by the triangulation of mobile phone antennas, GPS and Wi-Fi networks, allowing a seller to generate reports specific to his or her location, such as the consumption profile of the population of the surrounding region. A call center can use this functionality to determine which field technician is closest to a customer and accelerate customer service.
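A minimal sketch, in Python, of the real-time alert idea referenced in the list above; the threshold, the item names and the notify() hook are illustrative assumptions, since a real system would push the message through a mobile notification service.

  MIN_STOCK = 50  # illustrative minimum limit

  def notify(message):
      # In a real deployment this would call a push-notification or messaging API.
      print("ALERT:", message)

  def check_stock(readings):
      """readings: iterable of (item, quantity) pairs from the inventory system."""
      for item, quantity in readings:
          if quantity < MIN_STOCK:
              notify(f"{item} is below the minimum limit ({quantity} < {MIN_STOCK})")

  check_stock([("valve-xyz", 72), ("bearing-abc", 31)])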

It is also possible that mobile devices act as a channel to feed

the analytics system with new information. For example, a text or

question can be written, sent and compared to other information

databases (text and audio mining).

Mobile analytics is still recent and follows the trend of the world

in which people are permanently connected. Its implementation can be disruptive for organizations, and it must be very well planned so that the expected agility and business benefits are achieved.

For further information: http://www.ibm.com/software/analytics/rte/an/mobile-apps/

http://www.gartner.com/it/page.jsp?id=1513714

Page 56: Transformation and Change

56

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

THE RCA PROCESS IMPORTANCE
Gustavo Cezar de Medeiros Paiva

In the digital age, it is crucial for a company to avoid system outages, which result in drops in productivity, revenue losses and damage to the company's reputation. Given this, it is essential to

investigate problems affecting the company’s businesses. The

Root Cause Analysis (RCA) process aims to identify, correct and

prevent the recurrence of these problems.

The RCA process, covered in the Problem Management section

of the Information Technology Infrastructure Library (ITIL), is

considered to be reactive and proactive at the same time; reactive

because the problem will be investigated after its occurrence, and

proactive due to the investigation outcome, where it is expected

to contemplate a solution, so the problem does not happen again.

The problem investigation requires the participation of different

teams and disciplines, according to the problem category. It is

led by the problem management team or, if required, by a team

designated for that function. An RCA report is generated through

this collaborative work. It includes, among other information,

the impacted services, the problem description, the chronology of events, evidence, the actions taken to restore the service and, especially, the action plan to fix the problem.

There are several techniques for applying the RCA method; the most used are the "five whys" and the Ishikawa diagram, also known as the "fishbone" diagram. The first consists of repeatedly asking why the problem occurred until all possibilities

have been exhausted. The second technique is based on the

idea that the effect, in this case the problem, can have several

causes, which are mapped graphically in a diagram similar to

a fishbone, so that they can be better investigated.
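As a simple illustration of the "five whys" technique, the sketch below records a hypothetical investigation in Python; the problem, the answers and the action plan are invented for the example only.

  problem = "Order service was unavailable for 40 minutes"
  whys = [
      "Why? The application server ran out of memory.",
      "Why? A batch job loaded the full customer table into memory.",
      "Why? The query had no pagination.",
      "Why? The code review checklist does not cover data volume.",
      "Why? There is no standard for reviewing batch jobs.",
  ]
  root_cause = whys[-1]
  action_plan = "Create a review standard for batch jobs, including data-volume checks."

  print(problem)
  for i, answer in enumerate(whys, start=1):
      print(f"{i}. {answer}")
  print("Root cause:", root_cause)
  print("Action plan:", action_plan)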

When working on an RCA process it is essential that the necessary

resources be available. Such resources are called diagnostic

documents and are composed of elements such as files generated by the systems that contain information about their operation.

With the advent and spread of cloud computing in companies,

the challenge is to integrate environmental monitoring tools so

that the collection of information is successful. The idea is to correlate those data in order to determine, through the diagnostic documents, the relationships between deviations in application services and infrastructure failures.

Both cloud computing service providers and clients must make

efforts to integrate the incident and problem management tools

so that there is transparency in this process thus facilitating the

investigative work.

Regardless of the infrastructure type, the RCA process provides

an improvement to the availability and management of IT services,

thereby increasing the customers’ satisfaction and reducing

operating costs.

For further information: Book: ITIL Service Operation, by Great Britain: Cabinet Office - ISBN 9780113313075 - 2011

Page 57: Transformation and Change

57

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

CAN I SEE THE DATA?
Ana Beatriz Parra

If you have ever been to a presentation by Hans Rosling, you have probably fallen in love with data visualization. The lively presentation about economic development given by Rosling at TED in 2006 was seen by thousands of people

and is an example of how the visual representation of data can

reveal information that allows us to acquire a better understanding

of the world.

Vision is one of our keenest senses. Our visual system is very

good at perceiving position, extent, orientation, shape and size.

Through vision we can quickly perceive patterns and anomalies, such as differences in size, shape, orientation and placement of objects.

The visual representations of data can be classified in different

ways. The first distinction we can make is related to its construction,

either manual or through algorithms. In the first category, we

have infographics, which are representations of a given domain,

manually drawn and that, in general, cannot be replicated easily

to another set of data. Infographics are visually appealing and

currently widely used in newspapers and magazines to present

various data, such as the level of indebtedness of European

countries or the comparison between the several types of milk

available in the market.

In the second category we have the representations generated

by computational algorithms that can be reused for new datasets.

This category is called Data Visualization (DataVis) or Information

Visualization (InfoVis). The same visual representation can be

used repeatedly over time with updated data sets.
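As a minimal sketch of such an algorithmic, reusable visualization, the example below uses Python with matplotlib and assumes a hypothetical monthly_sales.csv file with "month" and "revenue" columns; rerunning it on an updated file regenerates the same chart.

  import csv
  import matplotlib.pyplot as plt

  months, revenue = [], []
  with open("monthly_sales.csv") as f:
      for row in csv.DictReader(f):
          months.append(row["month"])
          revenue.append(float(row["revenue"]))

  plt.plot(months, revenue, marker="o")
  plt.title("Monthly revenue")
  plt.xlabel("Month")
  plt.ylabel("Revenue")
  plt.tight_layout()
  plt.show()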

The New York Times is one of the media outlets that best utilizes

data visualization to enrich and facilitate the understanding of

their subjects, using both infographics and InfoVis.

Another classification we can use relates to the purpose of the visualization: exploration or explanation of the data. Exploration is used when we don't know the data and seek to understand it and identify the important information it can provide. In explanation, the aim is to communicate a concept previously understood. In this case, the visual is used to emphasize interesting aspects of the data and convey information already known by the author (probably acquired through previous exploration).

Increasingly these two categories are merging for developing

interactive visualizations, in which the author presents an initial

explanation of the information and provides users with ways

to explore the data, for example by changing the period under

examination, or by selecting a subset of the data.

The visual representation requires knowledge of a range of

disciplines such as programming for data collection and treatment,

mathematics and statistics for exploration and understanding of

the information, design for visual representation and, especially,

knowledge of the domain to which the data under review belongs.

Data visualization is an extremely rich resource to analyze and

represent information, but like everything else in life there are two

sides. Visualization used incorrectly can hinder understanding

or even lead to erroneous conclusions. To represent a piece of

information it is necessary to know the data very well, set the

question you want to answer or the message you want to convey,

identify your users’ profile and select representation techniques

adequate to your goal.

For further information: http://www.ted.com/talks/lang/en/hans_rosling_shows_the_best_stats_you_ve_ever_seen.html

http://learning.blogs.nytimes.com/tag/infographics/text

Page 58: Transformation and Change

58

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

LEARN WHILE PLAYING
Sergio Varga

Is anything more interesting than learning while having fun? Jean

Piaget (1896-1980), one of the great thinkers of the twentieth century, described in his cognitive theory that intellectual development occurs in four stages, with play and games being important development activities.

There are several initiatives and pedagogical practices in which

knowledge is taught through play, especially during childhood.

More recently, computer games have been introduced with the aim of teaching concepts and their application through activities. In addition, more complex questions, possibly requiring a different way of thinking, have been solved using computer games. One problem in AIDS research that had resisted three years of work using traditional means was solved in just three weeks when turned into a game in the Foldit environment.

Several solutions are emerging in the teaching of electronics

and programming logic. In 2005 a group of students from the

Interaction Design Institute Ivrea (IDII), in Italy, developed a low-cost, open-source microcontroller board based on the Wiring project, with which anyone can develop smart devices with minimal knowledge of electronics and programming logic. This board and

other similar boards have become an excellent learning support

tool in the academic world as well as for fans of technology.

But what does this simple board do? It allows the user, in a very simple way, to develop all kinds of electronic devices, from a basic blinking LED sequence to complete home automation. This type of board is based on a microcontroller that monitors inputs and controls outputs, both digital and analog, to which several types of instruments can be connected, such as sensors, lights, motors, etc. These devices are connected using jumper wires and breadboards, without soldering or special connections. On the programming side, it uses its own language with a friendly development environment, also open source. This allows anyone to create an initial experiment, like blinking an LED, in less than 5 minutes of work.
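The board is normally programmed in its own Wiring-based language and development environment, but just to show how small that first experiment is, the sketch below drives the same blinking LED from a PC in Python, assuming the pyfirmata package, a board already loaded with the standard Firmata firmware, and an illustrative serial port name.

  import time
  from pyfirmata import Arduino

  board = Arduino("/dev/ttyACM0")   # illustrative serial port name

  # Blink the on-board LED (digital pin 13) ten times.
  for _ in range(10):
      board.digital[13].write(1)
      time.sleep(0.5)
      board.digital[13].write(0)
      time.sleep(0.5)

  board.exit()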

Apart from its use in an academic environment, this board has the potential to be used commercially, mainly in development processes in which a circuit prototype would otherwise be required and expensive. Companies that do research may also benefit from this type of device for developing and testing new products, or, within the concept of smart cities, as an aid in the instrumentation and interconnection layers of systems and devices.

For those who work only with software and have little knowledge

of electronics, this type of equipment opens a new world of

opportunities and innovation.

Also, for younger people that are still discovering a taste for

science and engineering, this board stimulates curiosity, and

develops logical thinking through play, while educating the child in electrical, electronic, physical and also computing concepts. Could this new "toy" be the key to awakening children's and young people's fascination with technology and everything that surrounds it?

For further information: http://makeprojects.com/Topic/Arduino

http://fold.it/portal/info/science

Page 59: Transformation and Change

59

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

AUDIO PROCESSING IN GRAPHICS CARDS
Diego Augusto Rodrigues Gomes

Video cards contain a Graphics Processing Unit (GPU) which researchers have been using to solve problems that go well beyond graphics. GPUs are extremely efficient in applications that demand high computing power. Their use in computationally intensive problems has become even more popular since the manufacturers of these video cards began providing programming interfaces that address not just graphics but also general-purpose applications.

They have been applied to optimizing solutions to problems in bioinformatics, finance and physical simulations, applications that took much longer using only conventional CPUs. In this context, audio programs can also benefit from graphics hardware for more efficient processing, because they need to perform many operations, such as applying effects and simulating or synthesizing three-dimensional audio, and they often need short response times.

The concept of 3D audio is related to

the ability to simulate the placement of

a sound source in a three-dimensional

virtual space around a listener. This

happens with the help of a process

called binaural synthesis, in which the

left and right channels of an audio signal

are filtered by mathematical functions that allow one to simulate

such positioning. Therefore, in the same way one needs glasses

to try viewing in three dimensions, one must use headphones

to experience three-dimensional hearing with higher fidelity

sound positioning.

We perceive the spatial positioning of a sound source because the waves traverse different distances and reach the right and left ears at different moments. The brain, upon receiving this

information, allows us to identify the location of the sound

signal. In mathematical terms, the functions that define how

a sound wave reaches the entrance of the ear channel after

reflection on the head, trunk, and outer ear of a listener are

called Head-related Transfer Functions (HRTFs). These functions,

in addition to their applicability in the field of entertainment,

are also useful in helping the hearing impaired. There are

studies using HRTFs to simulate the positioning of a sound source and transmit that signal to the hearing aids of people with disabilities.

Some research centers, such as MIT and Ircam, maintain databases of HRTFs measured at certain positions around the listener. Determining these functions requires a considerable amount of resources, and for this reason it is not done for every position around a central point of reference. To obtain values for positions between known points, interpolation mechanisms are used to calculate them from the existing ones.
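A minimal sketch of the simplest possible interpolation between two measured responses, in Python with NumPy; the file names, the azimuth angles and the assumption that each file holds a two-channel (left/right) impulse response are illustrative only.

  import numpy as np

  hrir_30 = np.load("hrir_az30.npy")   # measured response at 30 degrees (samples x 2)
  hrir_40 = np.load("hrir_az40.npy")   # measured response at 40 degrees (samples x 2)

  def interpolate_hrir(azimuth, az_a=30.0, az_b=40.0, h_a=hrir_30, h_b=hrir_40):
      """Estimate the impulse response at an azimuth between two measured ones."""
      w = (azimuth - az_a) / (az_b - az_a)
      return (1.0 - w) * h_a + w * h_b

  hrir_35 = interpolate_hrir(35.0)

  # Binaural synthesis: filter a mono signal with the left and right responses.
  mono = np.load("mono_signal.npy")
  left = np.convolve(mono, hrir_35[:, 0])
  right = np.convolve(mono, hrir_35[:, 1])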

The performance gain of 3D audio applications on GPUs is interesting because it allows the construction of interactive applications that simulate and respond more efficiently to changes in positioning. This technology, in addition to being used to deliver stimuli that provoke new sensations in spectators in entertainment fields such as movies, music and games, can be used in room acoustics simulation and probably in other fields not yet explored. It also has an advantage over the current surround systems found in movie theaters and home theaters, which use five or more audio channels instead of only two.

Audio processing on GPUs will contribute significantly to the advancement of 3D systems, enabling the construction of increasingly realistic virtual environments and the development of devices that bring benefits to human life.

For further information: NVIDIA CUDA C Programming Guide, version 4.0

http://sound.media.mit.edu/resources/KEMAR.html

http://www.ircam.fr/

http://www.princeton.edu/3D3A/

Hearing Aid System with 3D Sound Localization, IEEE

Page 60: Transformation and Change

60

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

UNICODE ♥ וניקוד ☻ Уникод ♫ يونيكود
Avi Alkalay

Did you know that not long ago it was impossible to mix several languages in one sentence without the help of a special multilingual editor? And that some languages contained letters that didn’t even have a digital representation, making it impossible to use them in computers? All of this is in the past now with the advent of Unicode. To understand it, let’s review some concepts:

Character: it's the digital representation of what we call a letter, grapheme or ideogram. Some examples of characters: J (uppercase), ç (lowercase c with cedilla), Greek characters Φ ζ λ Ψ Ω π, financial symbols like $ ¢ £ ¥ ₪ €, math characters × ÷ ∞ ∂ ∑ ∫, Egyptian hieroglyphs, or וניקוד ("Unicode" in Hebrew), and many others we'll show in this text;

Glyph: a graphic representation of a certain character. The fonts Times New Roman and Arial use different glyphs to represent the character "g";

Encoding: it's a tip that we give to the computer so it knows which character or human letter it should use to show a certain binary code. For example, code 224 in the ISO-8859-1 encoding is the character "à", but in ISO-8859-8 it's the character "א". Notice that in the universe of these old encodings the letters "à" and "א" cannot coexist, because they use the same binary code (this is exactly the problem raised in the beginning of this article).

Before Unicode, only 1 computer byte was used to store the information of 1 character. This encoding had undesirable limitations. Unicode proposes a much larger range of unique and immutable binary codes per ideogram, allowing characters of different languages to coexist in the same text. In this example, "à" and "א" have Unicode code points that do not conflict: 0x00E0 and 0x05D0.

Unicode's history started in 1987 at Xerox and Apple, as an attempt to incorporate all the ideograms and letters in the world. This is obviously a set much bigger than the 256 characters that fit in 1 byte. A Unicode character can take from 1 to 4 bytes.
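These points are easy to verify in Python 3, where strings are Unicode by default; a minimal sketch:

  s = "Unicode וניקוד Уникод"          # Latin, Hebrew and Cyrillic in one string

  print(len(s))                        # number of characters
  print(len(s.encode("utf-8")))        # number of bytes: larger, since UTF-8 uses 1 to 4 bytes per character

  print(hex(ord("à")))                 # 0xe0  -> code point U+00E0
  print(hex(ord("א")))                 # 0x5d0 -> code point U+05D0

  print("à".encode("utf-8"))           # b'\xc3\xa0' (2 bytes in UTF-8)
  print("à".encode("iso-8859-1"))      # b'\xe0'     (the old single-byte code 224)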

Evolving to multiple bytes per character was nontrivial, since most software was not prepared for this. Counting the characters in a sentence is now different from counting the number of bytes used by that sentence. Displaying or printing that sentence is also a different task: there are languages written from right to left, such as Arabic and Hebrew, versus the ones written from left to right, based on the Latin system. In the article title, the word "Unicode" is written in both directions in the same sentence: in the Latin script (→), Hebrew (←), Russian (→) and Arabic (←), respectively. This serves as an example to show that the question of multiple directions of writing in the same sentence is covered and resolved by Unicode.

Unicode also introduced a performance challenge, as there are a lot more uppercase and lowercase characters to compare and more bytes to store and process. But all of this is a small price to pay compared with the evolution of computing power and the universality and longevity of information that Unicode offers.

Another aspect of the article title that you will notice is symbols like ♪♠☼☺. These are ideograms that are part of a range of Unicode characters called emoji, incorporated into the standard in 2010. For now, some emojis must be represented as text because they are still being implemented in some operating systems. On the other hand, they're already very popular in iOS (iPhone, iPad), Mac OS X Lion, and Linux. On Microsoft systems, only Windows 8 will have complete support for emoji.

Emoji is a landmark evolution of our written language, used intensively in social media and SMS. It's a lot more fun and expressive to write "I ♥ you", "I'm hungry, let's go?", "Loved it", "Today I'm zen", etc. How about these characters for your next tweet? ♐ ☠ ☢ ☭ ☣ ✡ † ➡ ☮ ☎ ♚ ♛ ✿. They are all characters just as common as "ú" or "H". Thanks to Unicode, no additional resource is required for your word processor to use them.

Unicode is already heavily used on the internet. It’s common to find pages that mix languages or use advanced characters. A Google report shows that between 2008 and 2012 the usage of Unicode in sites has grown from 35% to 60%. Clearly Unicode is an absolutely essential technology for a globalized and multi-cultural world.

Throughout this text I've shown some curious characters, letters and ideograms. To close, I leave you with one last idea:

For further information: http://www.DecodeUnicode.org/

http://en.wikipedia.org/wiki/Emoji

http://googleblog.blogspot.com.br/2012/02/unicode-over-60-percent-of-web.html

A little bit of Unicode does not hurt anyone.

Page 61: Transformation and Change

61

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

THE TRUTH IS A CONTINUOUS PATH
Paulo Maia

We’ve all experienced situations in

which we noticed that companies

providing products and services do not

know their customers. Despite having

a significant amount of data about

customers, companies are unable to

use this information effectively. In the case of companies that have

gone through mergers and acquisitions, a common practice in

today’s market, the problem is even greater. In addition, one in

every three managers makes decisions based on information

that they do not trust or do not have, according to IBM’s 2010

study Breaking Away with Business Analytics and Optimization.

Problems like these would not happen if companies treated their

information as a real asset: carefully managed and with high

control over its quality.

On the other hand, the challenge is only increasing. The amount

of data in the world is growing at an astonishing rate: approximately

90% of the total volume was created in just the last two years.

This era is the era of what is being called Big Data, which has

four main challenges represented by four “V”s:

• Volume of data. In 2011, approximately 1.8 zettabytes (ZB, which is equal to 10²¹ bytes) of data was generated. By 2020, this is predicted to reach 35 ZB. Google generates over 24 petabytes (PB, 10¹⁵ bytes) per day, Twitter about 7 PB and Facebook more than 10 PB.

• Velocity in the creation and integration of data with business

processes requiring information practically in real time.

• Variety of data. 80% of existing information is in an unstructured

format such as email, documents, videos, photos, social

networks and data from electronic sensors.

• Veracity. It is necessary to identify what information is reliable

in the midst of a considerable amount originated at a high

rate from a variety of sources.

From this scenario arises the concept of data governance, a

discipline that involves the orchestration of people, processes

and technologies, aimed at establishing control over these assets.

For the successful implementation of this discipline, two main

factors are important: choosing an executive sponsor to support

the activities that usually involve multiple business areas and

assessing both the current level of data governance maturity

and the level to be achieved over a given period.

In this way, the results can be measured, and the support of

the business areas maintained. The program must become an

ongoing process that establishes an initial scope aligned with

the company’s business strategy such as increased revenue

generated from better customer knowledge, cost reduction by

decreasing the costs of data storage or mitigation of risks by

more efficient management of credit risk.

The main disciplines that support the program are data quality,

security, master data management, analytical governance and

information life-cycle management.

Some of the benefits achieved by organizations that implement

data governance are: improved confidence of users in relation to

the reports and the consistency of their results when compared

with others originated from multiple sources of information, and

increased knowledge about the client that enables more effective

marketing campaigns.

It is important to note that the main cause of failures in the

implementation of a governance program is the lack of alignment

between business objectives and IT department programs.

IT should not be responsible for data governance, but rather act as its protector or caretaker.

For centuries, philosophers such as Nietzsche have sought

an answer to the meaning of truth, but this remains elusive.

In practical terms, the truth could be defined as the information with

the highest quality, availability, relevance, completeness, accuracy

and consistency. Companies that are able to implement data

governance programs, considering the velocity, variety, volume

and veracity of the information generated will have a tremendous

advantage in an increasingly competitive and intelligent market.

For further information: http://www.dama.org

http://www.eiminstitute.org

http://www-01.ibm.com/software/data/sw-library/

Page 62: Transformation and Change

62

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

EVERYTHING (THAT MATTERS) IN TIME
Renato Barbieri

Time flies. "Are we already in the middle of the year?" "It seems like it was only yesterday!" "I didn't see the time go by..." Philosophers can keep debating the nature of time, but in our daily life we need practical solutions to spend our time in the most rational and efficient way.

Time management methods,

techniques, and tools aim to

help us in the identification,

organization, and prioritization of our tasks while avoiding the

postponement of their execution.

Published in Brazil in 2005, David Allen’s book “Getting Things

Done” started the “Getting Things Done” movement, better

known as GTD.

The GTD method is based on very simple concepts and assumes

that everything we need or want to do occupies a valuable space

in our brain. Consequently, we end up wasting time and energy

when we worry about things that we have to do but we don’t want

to do. These sources of concerns are referred to as “stuff,” which

should leave our brain and be stored in some sort of repository,

such as a list on a sheet of paper, in an appointment book, or

even in a GTD software container. The main objective is to take

all the “stuff” out of our heads and save it in a storage repository

for future use.

The next step is to process all this information, i.e., we must

decide whether the task will be executed immediately (if it will

take less than two minutes, do it now!), whether it deserves to be

detailed and structured as a project, whether it will be delegated to another person, whether we want to postpone its execution

to a distant future, whether it should be saved as a reference, or

whether it should simply be dumped in the trash. Once the tasks

have been processed and organized, you can start to work on

each one of them.
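A minimal sketch of that processing decision in Python; the function and its parameters are only an illustration of the questions described above, not a tool defined by the GTD method itself.

  def process(item, actionable, minutes_needed, multi_step, can_delegate):
      """Return what to do with one piece of 'stuff' taken from the repository."""
      if not actionable:
          return "trash it, file it as reference, or park it for a distant future"
      if minutes_needed <= 2:
          return "do it now"                    # the two-minute rule
      if can_delegate:
          return "delegate it to another person"
      if multi_step:
          return "detail and structure it as a project"
      return "organize it and schedule its execution"

  print(process("reply to the client's e-mail", actionable=True,
                minutes_needed=1, multi_step=False, can_delegate=False))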

GTD also recommends a context-based task organization (at home, at work, in the street) in order to facilitate their execution in the proper settings and, consequently, allow us to use our time

wisely. The cycle is then closed with weekly and monthly reviews,

allowing tasks to be periodically evaluated and to have their

priorities adjusted according to their importance and urgency.

However, the use of this method requires changes in habits. A

great reference that complements these concepts very well is the

book “The Seven Habits of Highly Effective People”, by Stephen

R Covey. There is even an implementation of GTD, called Zen-

To-Done (ZTD), which incorporates the concepts described in

Covey’s book.

Another simple and interesting time management technique is

the so-called “Pomodoro Technique”. This technique is widely

disseminated on the Internet and has many supporters within the

Agile community. It employs a concept called timebox, which

proposes the division of tasks into execution periods of 25 minutes

followed by rest periods of 5 minutes. A longer rest of 15 to 20

minutes is recommended after a sequence of 4 pomodoros.

This technique is excellent for exercising focus on tasks

and uses only two lists: one for controlling daily activities and

another for keeping pending activities. The technique also

recommends the registration of each interruption we have to

face, since the overall result will show the extent to which our

productivity is affected.
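
A minimal sketch of the timebox cycle described above, assuming the 25/5-minute rhythm and a longer rest after four pomodoros; the demo_scale parameter is an invented convenience so the example runs quickly.

  import time

  # Illustrative Pomodoro loop: 25-minute work timeboxes, 5-minute short rests
  # and a longer rest after every 4 pomodoros, as described above. demo_scale
  # only shortens the waits for this sketch; use 1.0 for real timeboxes.
  WORK, SHORT_REST, LONG_REST = 25, 5, 15   # nominal durations in minutes
  interruptions = []                        # the technique asks us to log each one

  def timebox(label, minutes, demo_scale=0.001):
      print(f"{label}: {minutes} min")
      time.sleep(minutes * 60 * demo_scale)

  for pomodoro in range(1, 5):
      timebox(f"Pomodoro #{pomodoro}", WORK)
      timebox("Long rest" if pomodoro % 4 == 0 else "Short rest",
              LONG_REST if pomodoro % 4 == 0 else SHORT_REST)

  print(f"Interruptions registered: {len(interruptions)}")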

The above techniques are complementary to each other and

provide the necessary resources to help each person to find his/

her own style or solution. Imagine how interesting it would be to

apply one or more of the above techniques in your daily activities

and get to the point of telling yourself: “Wow! I managed to do

everything that was a priority for today. And now I have time to

spare. What can I do to take advantage of this time?” The ultimate

goal is to use the time rationally and intelligently in order to do

everything that matters in time.

For further information: http://www.davidco.com/about-gtd

http://zenhabits.net

http://www.pomodorotechnique.com


CLOUD COMPUTING AND EMBEDDED SYSTEMS
Nilton Tetsuo Ideriha

Cloud computing is a business model in which scalable and elastic computing resources are provided as a service

to customers in a self-service format and on-demand through

the Internet.

Initially, this new computing model focused on using services to replace computing resources such as a server in a remote datacenter or an application installed on the user's desktop.

There are, however, other types of systems that can make use of

the resources provided in the cloud. These can be characterized

as embedded computer systems which are a set of hardware and

software with the purpose of performing specific functions dedicated to a device or system. Embedded computer systems are present

in automobiles, medical equipment, aircraft and appliances.

They can use cloud services to expand their resources, thus

increasing the range of services available to users.

More and more embedded systems are connected to the Internet

and to corporate networks. This connectivity breaks down an

important barrier because, traditionally, their connectivity

was isolated to the device they were in and they could not access

other networks. This availability of new access points enables

the expansion of services offered by these systems. For example,

many models of cars have embedded devices that allow for

an integrated control system, GPS navigation, connectivity with

mobile phones and other electronic capability. Cars with Internet

access can access GPS routes, music, photos and files from a

central repository provided by a cloud storage service, making

it possible for the user to hear their favorite songs and find the

fastest way to their favorite places. All this is provided through a

cloud service.

Factory floor machinery also generates large amounts of data from sensors and control systems. This data can be sent to a cloud infrastructure, analyzed by big data analytics solutions and used in management, monitoring and data mining applications. All of this is done to predict failures and provide maintenance in a timely fashion to prevent downtime.
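
Purely as a rough sketch of this factory-floor scenario, an embedded controller could post a sensor reading to a cloud endpoint over plain HTTP, as below; the URL, token and payload fields are hypothetical, and real devices would more likely use a vendor SDK, MQTT or the starter kits mentioned further on.

  import json
  import urllib.request

  # Hypothetical endpoint and credentials; a real deployment would use the
  # provider's API, MQTT or a starter-kit SDK instead of plain HTTP.
  CLOUD_URL = "https://example-cloud.invalid/api/v1/sensor-readings"
  API_TOKEN = "REPLACE_ME"

  def send_reading(machine_id, vibration_mm_s, temperature_c):
      """Send one sensor reading so cloud analytics can watch for failure patterns."""
      payload = json.dumps({
          "machine": machine_id,
          "vibration_mm_s": vibration_mm_s,
          "temperature_c": temperature_c,
      }).encode("utf-8")
      request = urllib.request.Request(
          CLOUD_URL,
          data=payload,
          headers={"Content-Type": "application/json",
                   "Authorization": f"Bearer {API_TOKEN}"},
          method="POST",
      )
      with urllib.request.urlopen(request, timeout=10) as response:
          return response.status  # a 2xx status means the reading was accepted

  # Example: send_reading("press-07", 4.2, 78.5)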

Other examples of applicability are the services in the medical

field. These services are collecting data in Intensive Care Units

(ICUs) and sending the data to a cloud service. This service, in

turn, calculates the values of risk that may be compared with

external standards to measure the performance of ICUs in order

to guide the improvement in areas of poor performance.

Starter Kits are available in the market. These kits consist of a set

of hardware and software for cloud computing projects where

resources are accessed through APIs (Application Programming

Interfaces) embedded directly in client-specific software. These

kits offer cloud computing services for data storage, firmware

update, and remote access based on Virtual Private Networks

(VPNs) and remote configuration.

This new approach can expand the storage and processing capacity of embedded systems that were once isolated and

dedicated. This represents a new field to be explored where

businesses will find the promise of greater productivity, integration,

and functionality.

For further information: http://pt.wikipedia.org/wiki/Sistema_embarcado

http://www.eetimes.com/design/embedded/4219526/ The-embedded-cloud--IT-at-the-edge?Ecosystem=embedded


NANOTECHNOLOGY – HOW DOES THAT CHANGE OUR LIVES?
Amauri Vidal Gonçalves

Wikipedia defines nanotechnology as the study of the manipulation

of matter on an atomic and molecular scale, i.e., structures ranging

from 1 to 100 nanometres (10⁻⁹ m). To have a more objective idea

of the dimensions we’re talking about, it would be like comparing

the size of a soccer ball with the moon.
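
A quick back-of-the-envelope check of that comparison, using approximate figures (a ball of about 22 cm and the Moon's diameter of about 3,474 km): the ball stands to the Moon roughly as a few tens of nanometres stand to one metre, comfortably inside the 1 to 100 nm range.

  # Rough scale check for the soccer-ball vs. Moon comparison.
  # Approximate figures: ball diameter ~0.22 m, Moon diameter ~3,474 km.
  ball_m = 0.22
  moon_m = 3_474_000
  ratio = ball_m / moon_m                                   # ~6.3e-8
  print(f"ball/Moon ratio: {ratio:.1e}")
  print(f"equivalent length against 1 metre: {ratio * 1e9:.0f} nm")  # ~63 nm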

Nanotechnology is used in the development of products in several areas such as medicine, biology, chemistry and physics. It manipulates atoms to build stable structures, using highly specialized instruments such as the scanning electron microscope (SEM).

The concept of nanotechnology was approached for the first

time in December 1959 when Richard Feynman (1918-1988), a

renowned physicist, commented on the possibility of manipulation

of molecules and atoms, glimpsing the production of components

invisible to the naked eye. From 2000, nanotechnology began

to be developed in laboratories on projects that have enabled

its application in a variety of areas.

This technology is already present in our current life and will have

enormous impact in the near future. Some present day examples

are right in everyday items we encounter. Nanotechnology is

already used in the manufacture of sporting goods such as shoes, making them lighter and more resistant at the same time. It is used in automobile paints, making them more resistant to day-to-day wear. Companies such as HP, IBM, Toshiba, and

other manufacturers of storage and semiconductors are using

nanotechnology in their manufacturing processes.

In the near future in medicine, Nanomotors will be the basis for

the construction of nanorobots (nanobots). Nanorobots will be

introduced into the human body to find residual cancer cells

after surgery, localizing treatment and making it more effective.

Nanocameras might also be used to monitor health conditions, transmitting information to equipment through which doctors can diagnose and define the best type of treatment for diseases. Based on the diagnosis, nanorobots might take medications directly to the target, avoiding undesirable side effects.

Nanotechnology is used in the manufacture of textiles, clothing

and shoes, specially treated to repel liquids, avoid stains and dry faster. It is also used in the manufacturing of paper diapers, making them more resistant and longer lasting. Future applications include the possibility of making t-shirts lighter, more resistant and even bulletproof.

In Information and Communication Technology, the use of

nanotechnology has resulted in the production of displays that

are thin and malleable. Its use is also possible in the construction of

biodegradable and clean batteries from living organisms (such as

viruses), some positively charged and others negatively, separated

by insulating material.

In the automotive industry, nanotechnology has resulted in the

development of lithium-based batteries. These batteries are

successfully leveraged in the production of hybrid cars with

financial and environmental advantages.

Robust and portable environmental sensors will be able to perform

chemical analysis and make decisions. The next generation of electric power will be generated more cleanly, through the use of carbon nanotubes, contributing to a more sustainable planet.

These are just a few examples of the use of nanotechnology

in the near future. Numerous other areas such as food, defense,

electronics, cosmetics and traffic control, will be affected by its use.

I invite you to watch the selected videos below that illustrate

some of these innovative ideas presented above and that will

radically transform the world in which we live.

For further information: http://www.youtube.com/watch?v=KizHjy4U2vs

http://www.youtube.com/watch?v=7hZ5hinf9vo

http://www.youtube.com/watch?v=YqGkC5uJ0yM


IT WITH SUSTAINABILITY AND EFFICIENCY
Evandro Evanildo da Silva

The same technology that brings comfort to

our lives can often create waste and damage

our planet. It is clear that the environment

has been impacted by the unconstrained

disposal of electronic goods.

Electronic waste (e-waste), composed of

monitors, computer cases or other components, is often disposed

of incorrectly, piling up in nature and even on the streets of major

urban centres. It is already commonplace to find electronic remains

in public squares and streets and there is further environmental

damage caused by the heavy metals that make up batteries

and electronic components.

It is estimated that the world will produce around 50 million tons of e-waste per year, which today is discarded all over the planet,

usually far from where it was originally produced. Often this occurs

clandestinely in the least developed countries.

A computer, for example, is composed of about 18% lead, cadmium, mercury and beryllium (lead being one of the most dangerous metals). All of these irregularly discarded toxic materials present

an environmental problem today.

Hazardous substances contained in e-waste can contaminate

the soil, groundwater and other natural resources, in addition to

landfills, directly and indirectly affecting all forms of life. Technology

quickly advances without considering the disposal of artifacts

that become obsolete.

In addition to the concern about waste disposal, we need to

evaluate ways to improve the lifecycle of products, starting with

the use of more sustainable and less polluting materials in the

manufacture of new devices.

The exploitation of renewable energy sources, optimized use of

equipment, responsible disposal, improvements in management

and energy consumption, and recycling of electronic devices,

fall into what we call “the future in the era of Green IT”.

One possible way to improve is to leverage the benefits of cloud computing, which can contribute greatly by reducing idle

capacity, rationalizing usage and making IT more sustainable.

Hosting systems in a shared infrastructure can serve millions

of users in thousands of companies simultaneously, thereby

reducing the power consumption and the amount of e-waste,

with better use of existing equipment.

It is important to note that the servers that run at high utilization

rates consume more energy, but this is offset by the savings

through better utilization and distribution of processing and

memory workloads.

Many companies are adopting virtualization as a way of saving money, and they are investing in Cloud Computing to consolidate hardware and energy costs, while also improving the profile of data centers, which are now gaining a new version called “Green”.

The “Green Datacenter” seeks to use alternative sources of clean

energy, such as wind, solar and ocean energy. The latter can

generate electric power through the kinetic energy of waves

while cooling down through heat exchange. This approach has

been applied in floating data centers which, due to their mobility,

mitigate the restriction of physical space in urban areas, today

a major problem for the growth or construction of data centers.

New research is helping the development of technology and

resource preservation by exploring sustainable methodologies,

so that technological progress does not negatively affect the

future of the environment.

For further information: http://convergenciadigital.uol.com.br/cgi/cgilua.exe/sys/start.htm?infoid=25420&sid=97

http://www.cpqd.com.br/highlights/265-sustentabilidade-eeficiencia-em-ti.html

http://info.abril.com.br/corporate/noticias/google-obtem-patentede-datacenter-flutuante-04052009-0.shtml


THE STRATEGY AND ITS OPERATIONALIZATION
Luciano Schiavo

Which entrepreneur would not want to have higher profitability

and customer focus, reduce costs, have leaner processes and

employees with the ideal professional profile? There is not only one

way to achieve these goals, but it is possible to work with some

theories and methodologies that facilitate and simplify this task.

Michael Porter wrote in his article “What is Strategy” (Harvard

Business Review, pp. 61-78, Nov/Dec 1996) that strategy is the creation of a unique and valuable position involving a different set of activities. This position is also related to deciding which activities you should not do.

Also in this context, the decision about outsourcing services, such as IT, should be considered, allowing greater focus on activities directly linked to the business. For Michael Porter, cost reduction alone is not a strategy but self-cannibalism, because it compromises profit margins over the long term.

After the strategy definition, it should be

put in practice, and one of the ways to do this is through the

Balanced Scorecard (BSC) methodology, created by Kaplan and

Norton. They identified four perspectives that generate a lot of

value when used together. The financial perspective defines what success will look like in terms of financial return. The client perspective establishes how the organization wants to be seen by its clients. The internal processes perspective identifies how processes should be adapted to deliver the product or service to the client. The learning and growth perspective allows

you to examine whether the company has all knowledge and

skills needed to deliver what was defined in the strategy.

The next step, after you create the goals for each perspective,

is to create KPIs (key performance indicators) which will make

it possible to follow the evolution of the strategy implementation.

Usually at this point, the contrast with the company’s current KPIs

shows that some efforts were not aligned with the company’s

strategy. In this phase, it is common to start projects with the

goal to create and collect some information for the new KPIs.
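
Purely as an illustration, the sketch below ties each BSC perspective to a goal and a couple of KPIs; the perspective names follow Kaplan and Norton, while the goals, indicators and targets are invented examples.

  # Illustrative Balanced Scorecard structure: four perspectives, each with a
  # goal and KPIs used to follow the strategy's execution. Goals, KPI names
  # and targets are hypothetical examples, not a published scorecard.
  balanced_scorecard = {
      "financial": {
          "goal": "grow profitability",
          "kpis": {"operating margin %": 12.0, "revenue growth %": 8.0},
      },
      "client": {
          "goal": "be seen as the most reliable supplier",
          "kpis": {"NPS": 45, "churn %": 5.0},
      },
      "internal processes": {
          "goal": "deliver orders faster",
          "kpis": {"order lead time (days)": 3.0, "defect rate %": 1.5},
      },
      "learning and growth": {
          "goal": "build the skills the strategy requires",
          "kpis": {"training hours per employee": 40, "key-skill coverage %": 80},
      },
  }

  def report(scorecard):
      for perspective, content in scorecard.items():
          print(f"{perspective}: {content['goal']}")
          for kpi, target in content["kpis"].items():
              print(f"  target {kpi} = {target}")

  report(balanced_scorecard)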

In 2010, a research study (Harvard Business Review, Spotlight on Effective Organizations: How Hierarchy Can Hurt Strategy Execution, Jul/Aug 2010) presented and categorized the most significant obstacles to strategy implementation. The biggest offenders were lack of time and resource constraints. When the organizational structure was considered, the greatest difficulties were translating the strategy into execution, aligning positions and making the strategy meaningful for the front line.

Other studies also identified problems in conducting the BSC due to judgment biases when evaluating the performance of the indicators. The great opportunity, and at the same time the challenge, is to formulate what the strategy will be and what should actually be measured.

The advantage of following this approach of strategy plus indicators

is that executives can clearly see what is really essential and then

prioritize the projects correctly. Finally, this approach also helps

the company in pursuit of a single goal, aligning tasks, priorities,

communication and avoiding the “traps” of micro-management.

For further information: http://www.isc.hbs.edu/

http://www.balancedscorecard.org

http://www.lean.org/WhatsLean/History.cfm


THE EVOLUTION OF NAS
Hélvio de Castro Machado Homem

The Big Data subject, increasingly present in Executive agendas,

and the massive growth of data generated every day make

businesses and IT service providers, including Cloud Computing

vendors, rethink their strategies for data storage. The technologies

for this purpose have evolved significantly, thus allowing a more

intelligent distribution of data at a lower cost.

A good example of this is the technology Network Attached Storage

(NAS), which arose in the early 90s solely to play the role of file

server and, since then, has been gaining new improvements

and features.

NAS uses Ethernet standard network topology, operating

with traditional twisted pair cabling. This has a low cost of

implementation as well as satisfactory performance. You can

also adopt standard networks that operate at 10 Gbps

for environments that demand high performance.

Beyond traditional file-based protocols, as of 2001 some

equipment that provides the technology also began to allow the

use of block-based protocols, characteristic of the SAN (Storage

Area Network), which also use standard Ethernet networks. The

file-based protocols allow direct access to the file and directory structure, while

those based on blocks deliver the data in an encapsulated format

for storage system clients (for example, a database server) with

higher performance.

There are equipment options that provide NAS and SAN

technologies in an integrated manner, the latter through Ethernet

and fiber optics. These are normally referred to as Multiprotocol or

unified. Especially in scenarios where you cannot do without the

speed offered by optical fiber technology, having both technologies

together becomes quite interesting because, due to its great

flexibility, different requirements can be met at lower costs of

acquisition and maintenance.

Another technology that has advanced considerably is the Scale-

Out NAS, an evolution of the traditional NAS, which has a cluster

consisting of two nodes at most. The Scale-Out NAS is much

more scalable and allows the use of various nodes scattered

geographically, but that appear as a single device or access

point for the end user. This becomes especially important for

file storage services, such as those provided through Cloud

Computing. In them the user, when storing his data in the Cloud,

has no idea where it is being stored physically. The important

thing is that it can be accessed easily and quickly.
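
One simple way to make several nodes look like a single access point (an illustrative assumption here, not how any particular product necessarily works) is to hash each file path to a node, as in the sketch below; real scale-out systems add replication and rebalancing on top of this basic idea.

  import hashlib

  # Illustrative sketch of a single namespace spread over many nodes:
  # each path is hashed to one node, so the client sees one access point.
  NODES = ["nas-node-01", "nas-node-02", "nas-node-03", "nas-node-04"]

  def node_for(path, nodes=NODES):
      digest = hashlib.sha256(path.encode("utf-8")).hexdigest()
      return nodes[int(digest, 16) % len(nodes)]

  for f in ["/users/ana/report.doc", "/backups/db/2013-05.dmp", "/media/video.mp4"]:
      print(f, "->", node_for(f))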

Scale-Out NAS usage in Cloud Computing, Big Data, social

media and mobility is the main reason why IDC estimates that

the market revenue of this technology is expected to more than

double by 2015 (from 600 million to 1.3 billion dollars).

According to IDC, the market for file-based storage in general has

grown significantly in recent years and this trend will remain at

least until 2015. To give an idea, in 2011, this market represented

approximately 72% of the marketed storage capacity in the world

and by 2015 it should reach 80%.

The combination of different storage technologies allows you

to compose a hybrid environment, with layers differentiated by

performance and protocol. This is the best way to meet the business

and technical requirements and optimize data storage costs.

For further information: http://www-03.ibm.com/systems/storage/network/

http://en.wikipedia.org/wiki/Network-attached_storage


GO TO THE CLOUD OR NOT?
Antônio Gaspar

“Everything goes into

the cloud now.” It is

very likely that you

have come across

this phrase. Cloud

computing offers scalability, elasticity and fast provisioning, not to mention the promise of cost reductions. This promotes high expectations and euphoria in the marketplace. This is all possible, and it is real, but it has conditions. As Milton Friedman said, “there is no (...) free lunch”.

So, is it really true that everything goes into the cloud? The most

reasonable response would be: it depends. In other words,

it is necessary to evaluate the functional and nonfunctional

requirements of each workload (applications and other systems)

that are candidates for cloud. On the other hand, it is also necessary

to verify adherence to standards and intrinsic requirements of a

service in the cloud. Let’s explore some of the aspects, therefore

qualifiers, in an eligibility review process for the migration of a

workload to cloud.

Virtualization. This is one of the three fundamental pillars of cloud computing, along with standardization and automation. When analyzing a workload’s portability to the cloud, it is important to check compatibility

with the hypervisor system (software layer between the hardware

and the virtual machine), made available by the service in the

cloud. This detail may seem irrelevant but makes all the difference,

especially to ensure third-party support for the application in a

virtualized environment in the cloud.

Computational capacity. This applies particularly when adopting the

IaaS (Infrastructure as a Service) model of cloud. It is necessary to

estimate storage and processing capabilities that will be demanded

versus those that can be made available by the resources

in the cloud.

Features. Intrinsic to adopting the PaaS (Platform as a Service) and SaaS (Software as a Service) cloud models is the verification of the functional capabilities and possible parameterizations of the cloud service, in order to assess adherence to the functional requirements of the business applications.

Software licensing. This aspect has direct impact on TCO

(Total Cost of Ownership). Software providers are adapting and

establishing licensing policies of their products, specifically

aimed at use in a cloud environment. Although not a technical qualifier in itself, compliance with licensing policies is a critical factor in the eligibility analysis, given the risk of unforeseen costs in post-migration risk mitigation tasks.

Interoperability. With the diversity of models and cloud providers,

heterogeneous ecosystems could develop in which workloads

are distributed among traditional environments and one or

more clouds. Therefore, it is necessary to evaluate the degree

of coupling, representing the level of dependency between the

various distributed functional modules. Modules with a high degree

of coupling, running on geographically distinct environments,

require special attention, for example regarding network latency and the impact

of outages in “isolated clouds”.

Service levels. Each workload has an associated criticality aligned

to business requirements. You need to check if SLAs (Service

Level Agreements) provided by the cloud service provider meet

these requirements.

Security. This is a topic that certainly deserves more space and

discussion. Basically, emphasis is placed on guaranteeing confidentiality, controlling access to data and, due to regulatory constraints, on the location of the data repository in the cloud.
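
Purely as an illustration of how these qualifiers can be combined, the sketch below scores a workload against the aspects discussed above; the weights, scores and threshold are hypothetical, not a published method.

  # Hypothetical eligibility score for migrating a workload to the cloud,
  # combining the qualifiers discussed above. Weights and the cut-off are
  # illustrative assumptions only.
  WEIGHTS = {
      "virtualization": 3,        # hypervisor compatibility and vendor support
      "capacity": 2,              # storage/processing fit (IaaS)
      "features": 2,              # functional fit of PaaS/SaaS offerings
      "licensing": 2,             # licensing policies compatible with cloud use
      "interoperability": 2,      # low coupling with on-premises modules
      "service_levels": 3,        # provider SLAs meet the workload's criticality
      "security": 3,              # confidentiality, access control, data location
  }

  def eligibility(scores, threshold=0.7):
      """scores: qualifier -> value between 0.0 (fails) and 1.0 (fully meets)."""
      total = sum(WEIGHTS[q] * scores.get(q, 0.0) for q in WEIGHTS)
      ratio = total / sum(WEIGHTS.values())
      verdict = "candidate for migration" if ratio >= threshold else "keep on-premises for now"
      return ratio, verdict

  print(eligibility({"virtualization": 1, "capacity": 0.8, "features": 0.9,
                     "licensing": 0.6, "interoperability": 0.7,
                     "service_levels": 1, "security": 0.5}))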

It is important to note that these qualifiers vary in their relevance

according to the type of cloud adopted. Private Clouds are normally

implemented and guided by the company’s policies, increasing the

eligibility spectrum of workloads. Specifically in public and shared

private clouds these qualifiers are more relevant. Understanding

the workloads and cloud services is therefore crucial to adoption

of cloud computing. This new concept shatters paradigms of the

current models in providing services. It is real and irreversible,

promoting an unprecedented transformation in organizational

models, processes and information technology.

For further information: https://www2.opengroup.org/ogsys/catalog/G123

https://www.ibm.com/developerworks/mydeveloperworks/blogs/ctaurion/tags/cloud?lang=en


PROFESSION: BUSINESS ARCHITECT
Marcelo de França Costa

In the late 90s I used to be called a Systems Analyst. This title

allowed me to act in all phases of the development cycle of

software, from requirements gathering to the architecture of the

solution, and from modeling of the data through implementation

and testing. Today, however, we find careers more and more

specialized. In any new area of Informatics, the occupations

related to it and their specializations are constantly evolving in

response to the market. One of the latest occupations, with only

a decade of existence, is the business architect.

Business Architecture and its “sister” discipline Corporate

Architecture are responses to a comprehensive need of the

market to align Information Technology (IT) with business strategy

and goals. Terminology aside, many people agree that the

basic difference between them is

focus. The first one is interested in

understanding a business’s macro

plan, supply chain, operating

model, value chain, and the gap

between today’s state and the

desired mission and vision for

the company. The second one

starts with business goals and

the strategic IT vision including

governance, the project portfolio,

infrastructure, people and systems.

Leaving aside the differences, both are needed to support business processes with IT capabilities in an optimal manner. The figure (taken from the U.S. National Institutes of Health) illustrates a framework in which Business Architecture is shown as a part (discipline) of Corporate Architecture.

Regarding professions, another comparison is related to the

Business Analyst and Business Architect. While the Business

Analyst is usually interested only in the processes of a business

unit or department, the Business Architect is concerned with

modeling and analyzing the enterprise as a whole.

The discipline of Business Architecture is growing in importance,

as the demand for professionals with both an IT orientation and

business skills (for example, a degree in Administration, an MBA or a degree in Production Engineering) increases. For Alex Cullen, an analyst at Forrester Research, “it is a role built around business planning, finding opportunities to use IT more effectively”

in sales, client services and other critical areas. According to

InfoWorld, it is currently one of the six most attractive careers in

IT, with great potential for growth in the next years.

Like any other professional, the business architect also uses a

specific set of tools. In this regard, many companies are adopting

TOGAF (The Open Group Architecture Framework) to deploy

and evolve their architectures. TOGAF had its origin in the DoD

(U.S. Department of Defense), and comprises methods and

tools considered best practices. It is structured in phases, and

specifically deals with Business

Architecture (Phase B), examining

how the company must operate to

achieve its goals.

An activity performed during this

phase is the creation of models.

ArchiMate is the standard language that supports describing, analyzing and visualizing the relationships contained in the business domains. Such models illustrate different aspects (viewpoints) at

various levels of abstraction, from

the relationship with customers and

suppliers to internal aspects such as the technological platforms

that support business processes.

Briefly, I tried to introduce the discipline of Business Architecture,

as well as the role of the Business Architect. For both professionals

and companies, this is an opportune time to develop expertise in

this area of knowledge as the market demands greater alignment

of business and IT.

For further information: http://www.businessarchitectsassociation.org/

http://www.opengroup.org/togaf/


FOUR HOURS?
Sergio Varga

Imagine the following sequence of actions:

(1) leave your house, take your car, go to

the mall, buy a movie ticket, watch the show,

get your car in the parking lot and go back home; or (2) leave your

house, take a cab, go to a soccer stadium, buy a ticket, watch

the game and take a cab home. How much time, on average,

would you take to execute these actions? Let’s say four hours is

a reasonable amount of time.

On the other hand, imagine a company seeking to put a sales

website on a new server in their datacenter. How much time is

needed to enable such a task, from the installation of the equipment

in the datacenter to the launch of the sales website to be used

by the user? A month? A week? A day? Four hours?

Who said “a month”? You were certainly thinking of the traditional

model of IT services, where it is necessary to install the server in

the datacenter, configure the network connections and storage,

install the operating system and set it up, install and customize

the web server and database and finally install the web application.

Not to mention the allocation of professional resources from several

fields of support, such as networking, storage management,

server management and others.

Who said “a week”? Maybe you thought about a server already

installed in a datacenter, possibly virtual, using standard, previously created images, with an operating system and possibly even with the software and applications already installed and configured.

The more optimistic person, who said “a day”, certainly considered

an environment previously configured in test, requiring only

minimal customizations to enable the production system, or

an environment in private cloud with images already defined

and configured, requiring only the installation of the application.

What if there was the possibility to enable

this application in only four hours? Many

would say that is an impossibility and you

are dreaming! However, today, it is already possible.

Solutions exist today from companies that provide, through the integration of network, server and storage technologies, a single chassis. This single chassis has an automation layer that makes it possible to quickly deploy applications in a few hours. These solutions consolidate the knowledge of many professionals and demand a smaller contingent of technicians to administer and support them.

It is technology at the service of technology; or, to put it another way, “it is technology at the service of IT management”. This has already occurred with the use of robots, and it is now happening in the area of computer systems.

In a world highly connected and intelligent, the ability to react

in a quick way to change can be a competitive advantage. And,

surely, new solutions will arise based on this

technological concept. For example, in the area of business

analytics, there are custom solutions for specific industry segments

or cognitive systems with integrated solutions involving knowledge

of a particular area of business.

With this technology also comes a new type of professional:

the administrator of integrated systems. This person needs to

understand the various technologies used and various disciplines

of management including user management, security, and

performance monitoring. Could we be entering a new era of IT

management?

For further information: http://www.youtube.com/watch?v=g9EGP2tkoQw&feature=colike

http://tech.journeytofrontier.com/2012/04/ibm-unveils-puresystems.html


IF YOU PUT YOUR REPUTATION IN THE WINDOW, WILL IT BE WORTH MORE THAN $1.00?
Wilson E. Cruz

Often a commonplace and

seemingly insignificant fact

is the trigger or the catalyst

for an idea. Everything fits,

even something quite trivial, for

example a simple exchange

of stickers from the album of

my son over the Internet.

The site is simple: the user

registers themselves, registers

an album that he/she is

collecting, and identifies repeated and missing stickers in the

collection. The site takes care of doing the “match”, i.e. offering

the possibilities for exchange, which are obviously concluded in

the real world with the sending of repeated and desired stickers

done by mail. The crux of this transaction is the following: how to

trust that person who says they will send the stickers you need?

The resolution of this problem, on that site, is simple and remarkable:

each time you close a transaction, a pending evaluation is generated for both sides, which is resolved when the recipient declares

that he/she has received the stickers as agreed and, therefore,

is satisfied with the sender. By registering the receipt, a score is

generated for the sender. The accumulation of points translates

into reputation levels, represented by a symbol that is attached to

the user’s personal profile, and appears even when an exchange is

being proposed. When evaluating an exchange, the reputation

of a user appears clearly and influences the decision of the

other party. Trading with an “Archduke” who has done more than

2,000 exchanges is safer and more assured than trading with a “Pilgrim” who has no points yet.
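
A minimal sketch of the mechanism just described: each delivery confirmed by the recipient adds points to the sender, and accumulated points translate into levels. The “Pilgrim” and “Archduke” names come from the text; the thresholds and the intermediate level are invented for the example.

  # Illustrative reputation model: every transaction confirmed by the recipient
  # adds points to the sender, and accumulated points map to levels.
  LEVELS = [(0, "Pilgrim"), (100, "Merchant"), (2000, "Archduke")]  # thresholds are assumptions

  class Collector:
      def __init__(self, name):
          self.name = name
          self.points = 0

      def confirm_received(self, sender):
          """Called by the recipient when the agreed stickers arrive."""
          sender.points += 1

      @property
      def level(self):
          current = LEVELS[0][1]
          for threshold, title in LEVELS:
              if self.points >= threshold:
                  current = title
          return current

  ana, beto = Collector("Ana"), Collector("Beto")
  for _ in range(2500):                     # Beto completes 2,500 successful exchanges
      ana.confirm_received(beto)
  print(beto.name, beto.points, beto.level)  # -> Beto 2500 Archduke
  print(ana.name, ana.points, ana.level)     # -> Ana 0 Pilgrim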

What does the site do about beginners and their early exchanges?

When a beginner has no points, of course there is no reputation.

The solution is simple. Those with no points are invited to send

their stickers beforehand, in such a way that the other waits to

receive them, evaluates positively, thus generating the first points

for the sender and only then concludes their half of the transaction,

solving the problem of lack of initial reputation.

This process brings to light an important insight: reputation in the

virtual world is the repetition of successful interactions. It can be

an interaction of exchange, but it could also be, on another site,

the correct answer to a question, the timely payment of a debt

or the efficient delivery of a service.

Can we assume that someone who makes hundreds of successful

exchanges is always a good payer of debts or commitments?

Likewise, is someone who correctly answers many questions about

a subject also a good provider of services related to this topic?

These questions generate a huge field of opportunity regarding

using reputation to inform business dealings: a retailer could

include the reputation registered on the site of exchanges to

strengthen the credit analysis of a person who wants to buy a TV

in installments. A citizen interested in hiring a good cabinetmaker to build his living room furniture could start his selection on the websites of people keen on woodworking, looking for those who are the most frequent and loyal and who answer questions

more competently.

Multiple characteristics, competencies or virtues give rise

to multiple reputations, or practically a “virtual curriculum” of

reputations confirmed by successful virtual interactions in various

fields. Can you imagine the value that this curriculum, well

managed, can have for those who want to carry out activities

and business on the net?

The collection of data seeking to quantify the reputation is already

a reality, but each approach has its own formula, not necessarily

the correct or most useful one. No one, yet, has done something

really innovative in the area of management and exchange of

reputations for sustaining transactions of commercial value. Is

this an opportunity for the next billionaire?

For further information: http://www.trocafigurinhas.com.br

http://trustcloud.com


WHAT IS INFORMATION SECURITY?
Avi Alkalay

Did you know that security has been identified in recent years as

one of the subjects that generates most interest in the IT market?

Technology providers like to raise this theme in the media and at

technology events because they have many security products and

services to offer. There is a great deal of FUD (Fear, Uncertainty

and Doubt) used to promote the sale of security technology, just

as there is in areas such as personal security and armored cars.

If a vulnerability is maliciously exploited inside a company, the

person accountable for security will be severely punished by

his superior. An approach to mitigate this situation seems to

be acquiring as many security products as possible so as to

be relieved of guilt in case of any incident.

It is also a fact that the more security products

a company acquires the more products will

have to be managed, but this does not

necessarily mean that the company will be

safer. In fact, this might increase the chance

of being unsafe due to the complexity of the

operational environment.

So what is security? A definition that I like is

“IT security must be interested in everything

that covers the confidentiality, availability and

integrity of information”. This definition has

obvious derivations such as: “We are insecure if someone from outside can see our company’s internal information”; “We are insecure if our data disappears”; and “We are insecure if someone maliciously modifies our information”.

But what many overlook is that the information can be exposed,

lost or damaged by operational factors and not just malicious ones.

For example, exposures could be created by a full disk or by the misguided configuration of some software which has

nothing to do with security. An internal application developed by

an inexperienced programmer can consume all the processing

power of a server, leaving your service, and consequently the

information, unavailable.

Implementing measures such as firewalls, passwords or even

encryption is not sufficient to provide security. None of this is

effective if the IT operation is in inexperienced or incompetent

hands. Enterprise IT security must be a perennial value for all

participants along the flow of information, i.e. all employees of

a company. It is an end to end process and therefore must be

present from the development of an application by a programmer

until its use at the end user’s desk.

The initial step is to adopt a method. The second is to apply it in

the area of development of applications which, designed with

security considerations, make it easier to ensure real security

later. A good practice is to not reinvent the wheel every time

a new program is being written. The use of a mature market

framework such as Java Enterprise Edition can help solve these

problems and abstract them into layers, so that each programmer does not need to devise a corporate approach alone. I often say that

security is a synonym of organization. Is it

possible to conceive of a safe disorganized

data center? Will we do a good job if we

organize IT without thinking about security?

There is no security without organization

and vice versa. It is also common to find

companies in which security has such

emphasis (sometimes to neurotic levels),

that doing some types of business becomes

prohibitive, because “it’s unsafe”.

A common example of that is not allowing

the use of chat messaging tools or social networks. However,

once this decision is taken an opportunity might be lost to create

relationships with customers or partners who use such tools.

Is it good or bad to allow that kind of openness? Experience

has shown that the overall result is positive when it enables

communication between people.

The paradox is that companies only do business when their

employees communicate with the outside world and the natural

impulse of security is to restrict such communication. Protecting

the information doesn’t mean making it unavailable. Therefore,

finding a happy medium seems to be the way to manage the

security in IT responsibly, consciously, with an open mind and,

above all, innovatively.

For further information: http://WorldOfEnds.com


THE MATHEMATICS OF CHANCE
Kiran Mantripragada

“I am convinced that He (God) does not play dice.” Despite his

contributions to the birth of quantum mechanics, Albert Einstein

couldn’t accept its probabilistic formulation. That became clear

when Einstein wrote those words to his friend Max Born, in an

attempt to refute the mathematical development from Werner

Heisenberg, responsible for the Principle of Uncertainty. The

statement shows Einstein’s difficulty in accepting that nature

might be something either unpredictable or random. Current

science teaches us that Einstein was wrong about this issue.

Unfortunately, these words have become well known outside the

scientific world and are often used in religious or philosophical

debates, perhaps in a misguided way, to try to justify the existence

of a destiny or a predetermined future.

But Probability Theory hadn’t even been formally born when Einstein

made his statement. This mathematics for describing chances

was only formally axiomatized in 1933 by Andrey N. Kolmogorov, a few years after Einstein’s and Heisenberg’s statements. On the other hand, the concepts

of probability, randomness, chance and unpredictability have

been part of the common experience since classical antiquity.

For a long time these notions have been used in several

circumstances and places, such as gambling, casinos, dice

games, coin toss, divination, business decision-making, risk

analysis, and even in legislation.

However, it is common for human beings to make mistakes

when subjected to the notion of chance. A classic example is

the “Gambler’s Fallacy” in which players hold a common belief

that, after a string of losses in games of chance, a string of wins

always happens (and vice versa) as a sort of self-compensation.

But what does randomness mean? Do truly random or unforeseeable events exist in nature?

Even before Kolmogorov, it was already common to toss a coin

to show the concepts of unpredictability and chance. It is known

that even with 50% chance of dropping on a particular face, you

can’t say for sure what the next result will be. That does not mean

that the math is wrong. It only means that over an infinite number of tosses, the proportion of appearances of a particular face tends to

be 50%. Still, if the reader wants to be pragmatic, he can affirm

that this infinity must be an even number, because if it is an

“infinite odd” the value will never be exactly 50%.
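
A short simulation of that idea: each individual (pseudorandom) toss remains unpredictable, yet the observed frequency of heads approaches 50% as the number of tosses grows.

  import random

  # Frequency of heads over more and more tosses: each toss is unpredictable,
  # but the proportion tends to 50% as the number of tosses grows.
  random.seed(42)
  heads = 0
  for n in range(1, 100_001):
      heads += random.randint(0, 1)          # 1 = heads, 0 = tails
      if n in (10, 100, 1_000, 10_000, 100_000):
          print(f"{n:>7} tosses: {100 * heads / n:.2f}% heads")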

Controversies aside, can we say that randomness really exists in

nature? For example, it can be said that the game of coin toss is

described by Newton’s classical mechanics, i.e., if all initial and

boundary conditions (such as initial velocity, force, wind, friction,

mass, center of mass of the coin, etc.) are precisely known then

you can calculate which face will fall facing up.

Actually, this is exactly the problem of the weather forecast - any

instability or inaccuracy in initial conditions can bring differing

results. This is the “butterfly effect”, which is related to Chaos

Theory and thus differs from the concept of randomness.

And what about on the computer? Have you ever wondered how

to “generate” a random number? A computer scientist knows

that to generate a random number is not something trivial, so it

is common to use the term “pseudorandom” for these artificially

generated numbers. In short, the computer needs a formula to

generate numbers. But if there is a mathematical formula for it,

then the number generated is essentially not random, as it can

be calculated in advance.
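
To make the point concrete, here is one classic recipe for “generating” numbers from a formula, a linear congruential generator; given the same seed it always reproduces the same sequence, which is exactly why such numbers are called pseudorandom. The constants follow the well-known example generator from the ANSI C standard.

  # A linear congruential generator: a deterministic formula that "looks" random.
  def lcg(seed, count):
      state = seed
      for _ in range(count):
          state = (1103515245 * state + 12345) % 2**31
          yield state

  print(list(lcg(seed=1, count=5)))
  print(list(lcg(seed=1, count=5)))   # same seed, same "random" numbers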

This article does not aim to reach conclusions on the topic, but

to provide inputs for further discussions, perhaps in a bar with

friends. And for that, how about we start with the statement:

“Probably Nature is not deterministic”.

For further information: Article: What is a random sequence? (Sergio Volcham)

http://pt.wikipedia.org/wiki/Aleatoriedade



THE ORIGIN OF THE LOGICAL DATA WAREHOUSE (LDW)
Samir Nassif Palma

Information has a constantly growing value in organizations.

Large volumes of data are handled daily with the ultimate goal

of supporting decision-making processes.

The story of data storage and information management began

30 years ago with Decision Support Systems (DSS). These were

then replaced by Data Warehouses (DW), which became the

cornerstone for analytics and Business Intelligence (BI). Next,

the DWs grew to be larger, corporate-wide entities, so that all

departments became suppliers and consumers of information

in a structured environment.

The next step in the evolution of data storage was the Enterprise

Data Warehouse (EDW). However, as the volume of data and the

number of consumers grew, the response performance of these

systems became the limiting factor of the actual value of the

analytical environment for the company. If information cannot be

obtained in the required time, then it no longer has significance

for the business. The latency of information has become a critical

requirement for information environments.

This requirement has generated investments in technological

features, such as more powerful processors, faster networks,

magnetic disks with partitioned storage, access parallelism, etc,

all of which have provided better performance to end users.

However, such gain does not last long. As the value of the data

grows, so does the quantity of hits by the customer base. The

more hits, the greater the impact on performance, thus counterbalancing the technological gains.

Another factor comes into play when each business unit creates

its own processes and adopts its own data storage technology,

often outside the standards set by the corporate IT department.

These new types of data must be processed and consumed, and

represent high value to the end user. They are often unstructured

data, estimated at 80% of the total volume, including emails, texts,

spreadsheets, posts on social networks, blogs, videos, etc. The

percentage itself indicates a jump in total volume, today measured

in zettabytes (10²¹ bytes). This ‘Big Data’ has become the latest

protagonist in the information management story.

In 2009 the Logical Data Warehouse (LDW) concept emerged

to meet these challenges. The LDW allows for an integrated and

comprehensive view of all information assets of the organization,

encompassing data stores that are supported by different

technological resources across multiple platforms. The concept

proposes a new decentralized data aggregation, in contrast to

the centralized EDW model. The LDW is composed of multiple

repositories of data, distributed process elements, decentralized

workloads, specialized platforms, data virtualization, and efficient

metadata management.

Metadata, data that describes and explains data, becomes

essential in this vision, especially in the orchestration of the

accesses to the various databases and assets that store the

data requested. New intelligence is required to define which

element of the environment will respond to the requested demand,

and new governance is required to manage the catalog of

information.
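
A toy sketch of that orchestration role: a metadata catalog records where each information asset lives, and a small routing function decides which repository should answer a given request. The catalog entries and platform names are invented examples.

  # Toy illustration of metadata-driven routing in a Logical Data Warehouse:
  # the catalog knows where each asset lives, and requests are sent to the
  # platform that holds it.
  CATALOG = {
      "sales_orders":    {"platform": "EDW appliance",  "format": "relational"},
      "web_clickstream": {"platform": "Hadoop cluster", "format": "semi-structured"},
      "support_emails":  {"platform": "content store",  "format": "unstructured"},
      "customer_master": {"platform": "EDW appliance",  "format": "relational"},
  }

  def route(asset_name):
      """Return which repository should answer a request for this asset."""
      entry = CATALOG.get(asset_name)
      if entry is None:
          raise KeyError(f"asset '{asset_name}' is not registered in the metadata catalog")
      return entry["platform"]

  print(route("web_clickstream"))   # -> Hadoop cluster
  print(route("sales_orders"))      # -> EDW appliance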

The LDW protects the organization’s legacy investments in data

platforms (e.g. different data bases and suppliers, different file

systems, etc), while still allowing for new investments in specialized

demands (appliances, for example). For the business, the LDW

represents adaptation and response to growing informational

requirements of the market, which are often characterized by

high volumes and variety.

In conclusion, information is a valuable asset for an organization.

The data itself does not need to be centralized, but its governance, which includes metadata management and the control and administration

of information, must remain centralized.

For further information: http://www.ibmbigdatahub.com/blog/logical-data-warehouse-smart-consolidation-smarter-warehousing


STORAGE & FRACTALS
Márcia Miura

When I received the invitation to visit IBM’s Storage Laboratory

in Tucson (AZ), I imagined a room full of benches crowded with

nerds hunched over equipment with its insides hanging out...

would I be able to communicate with those scientists and learn

something?

This initial image was only one aspect of my experience, which

was nothing less than fascinating. I work as a designer of storage

solutions, which requires the consideration of practical aspects

such as cost, performance and architecture, always from a

business perspective. The design of a storage solution consists

of data modeling and analysis of the customer application’s

behavior. Thus, from the scientific perspective I am basically

an end-user of tools and products that

have been extensively studied and

tested in the lab.

The first meeting I was invited to was about

the behavior of data in cache memory

and its mathematical representation in a

new disk subsystem. The influence of a

new level of caching is verified through

performance measurements under several

different types of read and write workloads.

As in quantum physics, which studies the

behavior of electrons and tries to describe

it through equations, the behavior of cached data also needs

to be studied and described by equations that are entered into

modeling software that performs simulations. This study requires

measurements with different variables until a conclusion is reached

about the impact that the solution will have on storage customers.

During the discussions with experts in the behavior of cached

data, I was stunned to learn that fractal theory can be used to

model data access patterns in memory, including cache.

Benoit Mandelbrot (1924-2010), a researcher at IBM, showed in

1975 that any shape in nature could be described mathematically

in fractions that he called fractals. Any irregular shape such as the

structure of a cloud, a mountain, a broccoli floret or a pulmonary

alveolus can be endlessly broken into self-repeating fractions

forming a pattern. Mandelbrot investigated graphics generated

from data transmission errors and noticed that the error patterns

were equal for periods of one day, one hour and one second.

The microscopic view was a repetition of the macroscopic vision.

This discovery had an impact in several other areas such as

tumor diagnosis, generation of special effects for science fiction

movies (Star Trek was the first to use this technique) and in the

design of antennas for mobile phones.
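
As a small taste of the mathematics involved, the sketch below tests whether points belong to the Mandelbrot set by repeating the simple rule z -> z*z + c; it is the endless repetition of such simple rules that produces those self-similar shapes.

  # Membership test for the Mandelbrot set: iterate z -> z*z + c and check
  # whether the sequence stays bounded. Repeating this simple rule over the
  # complex plane produces the famous self-similar fractal images.
  def in_mandelbrot(c, max_iterations=100):
      z = 0j
      for _ in range(max_iterations):
          z = z * z + c
          if abs(z) > 2:          # escaped: c is not in the set
              return False
      return True

  # A tiny ASCII rendering of the set between -2..1 (real) and -1..1 (imaginary).
  for im in range(10, -11, -2):
      row = ""
      for re in range(-40, 21):
          row += "#" if in_mandelbrot(complex(re / 20, im / 10)) else "."
      print(row)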

Organizing storage solutions around a memory hierarchy including

cache brought significant performance improvements, but it

also triggered the necessity for complex cache management

algorithms. Bruce McNutt, a senior engineer in IBM’s Storage

Division, observed a repetitive pattern in data access traces

produced by a mainframe and presented

it in the book “The Fractal Structure of

Data Reference”. The access profile of the

server’s memory was also observed in the

processor’s buffers, in the processor’s central memory, in the subsystem’s disk cache

and in the physical disks. With this finding,

software developers, hardware architects

and product architects can design

intelligent algorithms that are able to

optimize the usage of all different levels

of memory resulting in better performance.

Storage solutions tend to be increasingly

intelligent and integrated with software and, for that, knowledge

of the access patterns is fundamental.

It was hard to imagine that those colorful and graceful structures

could explain so many things in nature and in our day-to-day life

in technology. From the philosophical standpoint, it is possible to

say that there is always a new way of seeing the world (Euclidean

geometry did not allow this vision) teaching us that a small part

can represent the whole.

For further information: Fractals – Hunting the hidden dimension

The Fractal Structure of Data Reference, Bruce McNutt

TCL-BR MP #123 - ibm.co/16sDsuQ (Portuguese)


SOCIAL BUSINESS VERSUS SOCIAL BUSINESS MODEL
Rodrigo Jorge Araujo

Do you really know the meaning of the term Social Business?

How it was created or how it is used in the market?

The term Social Business was established over 20 years ago by

the economist and Nobel Peace Prize winner Prof. Muhammad

Yunus. It defines a socio-economic development model which is

based on a philosophy of investment in capacity of people and

companies to become self-sufficient, inventive and entrepreneurial

with the aim of mutual development.

In the definition of Yunus: “A social business is a company without

losses or dividends, designed to achieve a social objective within

today’s highly regulated market. It is different from

a nonprofit organization because the

business must seek to generate a modest

profit, but this will be used to expand the

company’s reach, improve the product

or service, or in other ways to subsidize

the social mission”.

Some principles were created to define

the Social Business, according to Yunus:

• The business goals are not to

maximize profit, but to overcome

poverty and other problems that threaten people, in areas such as education, health, access to technology and the environment.

• Economic and financial sustainability and environmental

awareness.

• Investors receive back only the amount they invested;

no dividend is given beyond that amount; the company’s profit remains in the company for expansion and improvements.

• The manpower involved receives market remuneration,

with better working conditions.

• Do it with joy.

On the other hand, the Social Business Model (frequently also called Social Business) is a recent model applied to businesses

that have adopted social networking tools and practices for

internal and external functions within their organizations. Its goal

is generating value for all stakeholders, such as employees,

customers, partners and suppliers.

In this new business model, companies must increasingly hear,

understand and respond to their customers’ needs, while consumers increasingly want to know about the reputation, integrity and ability of companies to meet their requirements and needs. If this interaction is not efficient, the risk of losing market share

is high and real.

E-commerce has changed the way people and companies do business; Social Business is changing the way the parties build their reputations, which directly affects their ability to remain active in the market. It is a remarkable change in the way companies and individuals relate to one another.

For this reason, more and more

companies seek high-speed communication solutions, social networks, cloud data storage and analysis of large volumes of data that help them understand and communicate with their customers and

business partners.

In this scenario the technology

plays an essential role in supporting

and managing the new social and

commercial interactions that will not be optional but essential for business success.

And, as in the past, new areas and opportunities are beginning

to emerge, as well as the need for specialized professionals in

various disciplines. Have you ever imagined yourself in a strategic

meeting with the Director of Online Marketing or involved in a

project with the Manager of Communities and Social Networks?

For further information: Book - Building Social Business: The New Kind of Capitalism that Serves Humanity’s Most Pressing Needs. [S.l.]:PublicAffairs, 2011. 256 p

http://bit.ly/1090gcP

http://bit.ly/16UcoFi

http://onforb.es/11Eem8Q


SCIENTIFIC METHOD AND WORK
Gregório Baggio Tramontina

Most of the time we do not realize it, but we apply at least part of

the scientific method in our daily lives and in our work. It helps

us to solve problems and to provide arguments and justified

information when necessary. But what is the scientific method

and why is it important to know?

The scientific method is a set of techniques employed in the

investigation of the phenomena that surround us in order to either

generate new knowledge or

to adjust and fix what we

already know. It is an empirical

effort based on measurable

evidence, and although the

specifics vary according to

the field of knowledge, we

can identify two common basic

elements: a set of hypotheses

and their validation tests.

After observing a phenomenon,

a scientist proposes one or

more hypotheses to explain

it. The hypotheses do not come

out of nowhere, but from what

is already known about the

phenomenon (or similar phenomena) and are also subject to

plausibility analysis. After setting the hypotheses, scientists

propose tests to either validate or disprove them. The tests should be reproducible, to enable independent verification, and must be conducted under objective and controlled conditions to avoid biased results. If valid, hypotheses

are able to accurately forecast certain values, behaviors or new

facts about the phenomenon. These predictions can also be

validated with further tests and observations, making the research

even more grounded.

This process is finished when it finally delivers a theory. The

word theory has a different meaning when used informally and

in science. Colloquially, a theory is simply a “hunch” about the

explanation of something, without the need for further confirmation.

Scientifically, theory has a stricter connotation, consisting of a

body of established knowledge supported by concrete evidence.

Notable examples are Charles Darwin’s theory of evolution and

Albert Einstein’s relativity, which still provide verifiable explanations

for a wide range of natural phenomena and are able to pass

newer tests they have been subjected to.

Of course, some research works do not lead to a completely

new theory, but instead to proposals of adjustments to existing

knowledge, to confirmations of new aspects of a theory or even

to proofs that important concepts, in light of new evidence, are

actually incorrect (see the case of the aether, a substance that was believed to be the medium for the propagation of light and whose existence was refuted by the famous Michelson-Morley experiment in 1887; see the link in the "For further information" section).

In our work we often have to

face situations that can only

be solved with the support of

more accurate analysis. These

are the moments in which skills

such as critical thinking are

most sought. The elaboration

of hypotheses and their related

tests comprise the kernel of our investigative process.

Moreover, it is possible to trace a direct relationship between what

we do to solve our professional challenges and the elements

of scientific method. Therefore, a deeper knowledge about it

will provide us with the opportunity to improve the results of

our work, which is reflected in all the derived factors such as

customer satisfaction. An example might be identified in production

support teams, where an adequate analysis and a rapid and

correct resolution of a problem can be the key aspect defining

the success or the failure of a project.
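As a minimal sketch of this hypothesis-and-test loop applied to production support (the incident data and helper names below are hypothetical, not taken from the article), one might record candidate causes for a slow service together with the test that would confirm or refute each one:

# A minimal sketch of the hypothesis/test cycle applied to a slow service.
# All names and data here are illustrative, not from a real incident.

hypotheses = [
    {"cause": "database connection pool exhausted",
     "test": "compare active connections against the pool limit"},
    {"cause": "recent deployment introduced a regression",
     "test": "roll back to the previous build in a staging copy and re-measure"},
    {"cause": "network latency between app and database increased",
     "test": "measure round-trip times and compare with the historical baseline"},
]

def investigate(hypotheses, run_test):
    """Run each test; keep only hypotheses whose evidence supports them."""
    return [h for h in hypotheses if run_test(h["test"])]

# run_test would execute the measurement and return True when the evidence
# supports the hypothesis; the surviving hypotheses guide the next, deeper tests.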

The scientific method has the ability to improve our work and its

lessons have great range and immediate application. Therefore,

it is worth knowing and applying.

For further information:What Was the Michelson-Morley Experiment?

Understanding and using The Scientific Method

Wikipedia - Epistemology


WHAT IS THE SIZE OF THE LINK?
José Carlos Bellora Junior

In every IT infrastructure project there’s a very common question:

what is the size of the link? Indeed, as new systems, users and locations need access, the ability of the network links (also known as data links) to provide a good service is often questioned. The planning and management of the communication

networks’ capacity can be facilitated if the traffic involved is

predictable, or can be measured in order to bring it closer to a

standard model. Determining the behavior of the traffic through

measurement is a fundamental requirement for the estimation

and management of the resources in a data network.

Measurement and modeling

of traffic have been carried out

ever since there was the need for

remote computers to exchange

information with each other.

The data traffic has periods of

“bursts” followed by long periods

of "silence". This characteristic is observed in measurements at various time scales (from milliseconds to minutes), which characterizes the self-similarity of the traffic. The importance of this behavior lies in the fact that it is difficult to determine a natural time scale for the estimation, because real traffic does not converge to a mean value at larger scales. This invariant characteristic of the bursts results in low utilization of network resources for any kind of service. This means it is necessary to leave idle bandwidth to accommodate the traffic during possible burst periods.

The inefficiency in the use of communication channels causes

the technology to be employed on the basis of the principle of

dynamic sharing of network resources (routers, switches, links).

Data communication between computers is multiplexed onto a single channel not in a deterministic way, with reserved time slots, but rather at random (statistical multiplexing), so that access is immediate at any given time and for any duration.

This way, computers can communicate by exchanging messages

through shared links, without the need for dedicated circuits.

Studies show that the response time of the network is directly

influenced by the size of the message, which leads to the need for

smaller sizes so that the transmission time can be optimized. This

concept leads communication to be carried out through the exchange of small segments of information known as packets, the essence of current networks.

Obtaining the necessary data for a precise characterization

of traffic in high-performance networks is essential for the

development of new technologies, capacity planning, engineering

and management of network traffic. Most of these activities require a model to predict short- or long-term traffic. Currently, network administrators use measurements based on the Simple Network Management Protocol (SNMP), built into the network components themselves (routers and switches), or packet monitoring, for which specific equipment (sniffers) is required to capture and store data. These measurements allow varied information about the traffic to be obtained with a greater or lesser level of detail, depending on

the method employed. It is important that the network designer

has information that points out prevalent characteristics of traffic

and usage patterns of applications that help identify potential

problems, such as congestion.
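As a minimal sketch of how such measurements feed link sizing (the byte counters below are invented, and the average-versus-peak comparison is just one common heuristic, not a recommendation from the article):

# Minimal sketch: estimate link utilization from periodic byte-counter samples,
# the kind of data SNMP polling of a router interface typically provides.
# The figures below are invented for illustration only.

POLL_INTERVAL_S = 300            # 5-minute polling, a common SNMP setup
LINK_CAPACITY_BPS = 200e6        # assumed 200 Mbps link

# Bytes transferred in each polling interval (counter deltas).
byte_deltas = [2.1e9, 3.4e9, 1.2e9, 5.8e9, 4.9e9, 0.7e9]

# Average throughput of each interval, in bits per second.
throughput_bps = [8 * b / POLL_INTERVAL_S for b in byte_deltas]

average = sum(throughput_bps) / len(throughput_bps)
peak = max(throughput_bps)

print(f"average: {average / 1e6:.0f} Mbps "
      f"({100 * average / LINK_CAPACITY_BPS:.0f}% of the link)")
print(f"peak:    {peak / 1e6:.0f} Mbps "
      f"({100 * peak / LINK_CAPACITY_BPS:.0f}% of the link)")

# Because traffic is bursty and self-similar, sizing the link for the average
# alone leaves no headroom; the gap between average and peak is the idle band
# that must be provisioned for burst periods.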

Now, whenever you question what size your link should be, think

how much this will depend on the pattern of network traffic.

For further information:http://www.ibm.com/ibm100/us/en/icons/watson/

http://www.ibm.com/innovation/us/watson/

http://ibm.com/systems/power/advantages/watson


NOSQL DATABASES
Claudio Alves de Oliveira

Although the NoSQL concept emerged in 1998, it is still not well known, even among technology professionals. To understand

NoSQL, it is necessary to consider the broader subject of Big Data

that has received considerable attention from IT managers and

entrepreneurs due to its importance to operational and strategic

decision making, and its potential to generate new businesses,

product lines and information consumption needs.

To deal with enormous volumes of data and take advantage

of it in the best way, technologies that support Big Data have

been created such as NoSQL for database infrastructure, Stream

Computing as a new paradigm, and Hadoop and MapReduce

for data analysis.

NoSQL (Not only Structured Query Language) is a generic

term for a defined class of non-relational databases that have

characteristics called BASE (Basically Available, Soft state,

Eventual consistency). This class of non-relational databases

distributes the data in different repositories, making them always

available while not worrying about transaction consistency.

It instead delegates that responsibility to the application, ensuring

that data consistency is handled at some point after the transaction.

This concept is exactly the opposite of the main properties

of a traditional RDBMS (Relational Database Management System), which are atomicity, consistency, isolation and durability, also known as ACID.
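To make the contrast concrete, the minimal sketch below (hypothetical fields and data, written in plain Python rather than any specific product's API) shows the same customer kept as a single denormalized document, NoSQL style, and as normalized relational rows:

# Minimal sketch: one customer and its orders modeled two ways.
# The field names and data are hypothetical.

# Document (NoSQL) style: everything the application needs in one record,
# easy to partition across many servers and to read with a single lookup.
customer_document = {
    "_id": "customer:42",
    "name": "Ana Souza",
    "email": "ana@example.com",
    "orders": [
        {"order_id": 1001, "total": 150.0, "status": "shipped"},
        {"order_id": 1002, "total": 89.9, "status": "open"},
    ],
}

# Relational style: the same information normalized into two tables,
# kept consistent by the database itself through ACID transactions.
customers_table = [
    {"customer_id": 42, "name": "Ana Souza", "email": "ana@example.com"},
]
orders_table = [
    {"order_id": 1001, "customer_id": 42, "total": 150.0, "status": "shipped"},
    {"order_id": 1002, "customer_id": 42, "total": 89.9, "status": "open"},
]

# In the BASE model the document may be replicated to several nodes and the
# replicas converge over time (eventual consistency); the application, not
# the database, decides how to handle a stale read if it matters.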

NoSQL doesn't break the "empire" of relational databases; rather, it complements them, since both technologies can coexist.

Among the advantages of the NoSQL databases over the relational

are their ease of vertical scalability (increase of resources within a

server) and horizontal scalability (increase in the number of servers).

This ease benefits developers, who can focus more on their applications and less on maintenance. This is one of the

biggest reasons why NoSQL databases spread quickly among

the largest web applications running today.

As it was designed for distributed data storage on a large scale,

large companies dealing with search engine and social media

services benefit greatly from NoSQL technology, and studies

indicate that its adoption is growing fast.

It is the business needs that define whether the NoSQL or the RDBMS approach should be used. A few comparison criteria have

to be used, such as system scalability, data consistency issues,

the usage or not of a query language and overall ease of use.

The relational databases have been on the market longer, therefore

they are more mature and robust, but have some limitations. On the

other hand, NoSQL, while still going through standards definition,

is a key element for the success of initiatives around Big Data.

For further information:http://www.google.com/trends/explore#q=NOSQL

http://www.ibm.com/developerworks/br/data/library/techarticle/dm-1205bigdatauniversity/


THE CHALLENGES OF THE INTERNET OF THINGS
Fábio Cossini

In his article “The Internet of Things” (2011), José Carlos Duarte

Gonçalves presents us the evolution of the Internet and the concept

of what is known today, among other names, as the Internet of

Things. Evolving from the historical human-machine interaction via browsers, this new Internet enables connections between objects, people and the environment that surrounds them, resulting in the exchange and processing of information for taking action, often without human intervention. However, as at the beginning of any new technological era, there are many challenges to its consolidation and wide acceptance.

The application of the Internet of Things has already changed

the everyday life of thousands of people around the world. The

Spanish project SmartSantander has transformed the city of

Santander into an outdoor research lab. It has brought real

benefits for researchers with their pilot projects and citizens

with information on traffic, parking slots, places for loading and

unloading supplies, temperature, humidity or noise pollution.

There is already research in medicine on monitoring Alzheimer's or diabetes patients through the Internet of Things. With sensors implanted directly into patients' bodies, those patients may, in the near future, send information to applications that can request more efficient and assertive drugs to treat each individual according to the diagnosis received. In the case of Alzheimer's disease, efforts focus on allowing patients to lead a more independent life in terms of geographical mobility through monitoring.

For commercial applications, the insurance business will be one

of the most affected since the measurement of individual habits of

policyholders could lead to a personally priced policy. In addition,

insurers can mitigate risk individually by offering guidance to each insured person to protect him or her from potential claims, such as avoiding regions with a greater probability of auto theft, or by offering remote residential surveillance through nearly invisible motion sensors connected to the Internet.

However, for the benefits of the Internet of Things to be fully realized, some obstacles must be eliminated. The first one is the diversity of existing standards for communication between objects. The CASAGRAS2 project, sponsored by the European Community, identified 127 published standards and 48 under development in its final report of 2012. Those standards covered 18 areas that ranged from radio-frequency identification (RFID) protocols to communication

standards for specific industries, such as health care.

With the exponential growth of objects that can communicate with

each other, the unequivocal identification of each one becomes

imperative. IPv6 was created in this direction: its new 128-bit addressing allows the identification of 79 octillion times more addresses than IPv4, that is, more than 56 octillion addresses per inhabitant of the planet (roughly 6 billion people).
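A quick calculation confirms those orders of magnitude (using the 6 billion population figure cited in the text):

# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
ipv4_addresses = 2 ** 32          # about 4.3 billion
ipv6_addresses = 2 ** 128         # about 3.4 x 10**38

ratio = ipv6_addresses / ipv4_addresses          # 2**96, roughly 7.9 x 10**28
per_person = ipv6_addresses / 6_000_000_000      # population used in the text

print(f"IPv6/IPv4 ratio:      {ratio:.1e}")       # ~7.9e+28 (79 octillion)
print(f"addresses per person: {per_person:.1e}")  # ~5.7e+28 (56 octillion)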

As a result of the number of objects that can be connected

collecting and processing information, there is an increasing

need for storage. The information gathered can be very volatile, requiring the devices themselves to store it, or it may need to be kept for much longer, depending on the needs of the business application itself or on legal requirements. In this scenario, cloud computing and Big Data will have a prominent position

to absorb the need for ubiquitous information generation and

processing for human use.

A convergence in the research and conceptualization of the

Internet of Things will be decisive for it in the coming years. Global

integration will be possible through an interconnected world with

mechanisms that allow the exchange of information at costs that

support a smarter planet.

For further information: http://www.ipv6.br

http://www.iot-i.eu/public/news/inspiring-the-internet-of-things-a-comic-book


BRING YOUR MOBILE DEVICE
Sergio Varga

You may have heard of or read about BYOD (Bring Your Own

Device). If you haven’t, it refers to the massive increase in the

use of personal mobile devices in corporate environments, and

the need for corporations to deal with this equipment within their

business environment.

Mobile devices include devices such as Smartphones, cell phones,

PDAs, Tablets, among others.

Besides allowing the use of mobile devices, there is a need

to create or adjust existing applications to support these new

types of devices. Most business applications were designed

to be accessed through personal computers or fixed terminals.

Application development targeted to mobile devices requires

greater attention to security, data traffic volume, availability

and compatibility.

According to Cezar Taurion, Manager

of new technologies at IBM, this is a

phenomenon that companies cannot

ignore. Instead, they should deal with it

head on and define usage policies that

ensure these devices will not compromise

the business. In Cezar's article published on iMasters, he enumerates concerns

such as cost, technical support, security

and legal restrictions.

One way to tackle these concerns is the concept of Inverse-BYOD,

in which the company provides employees with mobile devices

instead of accepting personal devices, although this does not

solve all the issues raised by these devices. To make matters

worse, we have two other new technological trends that introduce

additional challenges: social business and cloud computing.

Considering these two new trends, application developers now

need to worry about the location of the data, additional security

measures and data sharing on social media. Besides that, they

must develop new applications that integrate all these technologies,

whether they are internal or external applications.

From the device management point of view, solutions exist that,

while still in their initial phases, consider the integration of those

three technologies.

We recognize that companies have several challenges related

to mobile devices, which are not yet fully resolved and there are

many other challenges yet to come! New mobile devices are

already being researched. For instance, the SixthSense project, in

which devices associated with the human body can interact with

the environment. There are many opportunities for applications

in the most diverse areas of business, such as e-commerce,

electronic media, and any other technology that enables people

interaction, especially in social business.

Another example of devices that companies will need to manage and support are the ones that can read brainwaves and perform certain daily tasks. While this may seem far-fetched, in the academic world, especially in medicine, research projects are underway, as described by The Guardian. It's just a matter of time before they reach the corporate world.

As we’ve discussed, mobile devices are

here to stay. It is up to companies, be they

technology consumers or providers, to manage those devices,

and develop new products and business models leveraging

them. It’s a fast moving space, considering that the massive

increase in mobile smartphones usage, propelled by the iPhone,

has happened in less than five years.

Who can tell what will happen in the next five?

For further information:https://ibm.biz/BdxvQT

https://ibm.biz/BdxvQw

https://ibm.biz/BdxvQQ

https://ibm.biz/BdxvQ9


THE SKY IS THE LIMIT FOR INTELLIGENT AUTOMATION
Moacyr Mello

Machine Learning is a discipline of Artificial Intelligence that deals

with the identification of patterns that can be treated statistically.

On the other hand, Natural Language Processing, which became

popular after the success of Watson on Jeopardy!, is another

approach that, aided by linguistics, focuses on applying pattern identification to the written language of several types of texts. In IT (Information Technology), we could adopt

these techniques, for instance to create software specifications,

to develop maintenance and data center support procedures or

to build a business proposal. In any case, we can use them in

document types which have rules and training standards. The

combination of these elements makes it possible to propose a

better mechanism for automation in IT.

Machine learning algorithms can infer the

results of complex systems of equations

which are difficult to formulate mathematically.

As the majority of IT activities are related to

the definition and description of the system

as well as writing code, why not use these

techniques to facilitate intelligent automation

within the development environment?

Some activities such as planning and

project estimation may be partly automated

already. One effect of this is to promote

standardization and accelerate software development.

The idea behind the patent “Effort Estimation Using Text Analysis”

is to use these features to estimate the implementation effort for

software specifications based on the use cases technique to

capture requirements. It is a statistical approach which considers

that automation, speed and the ability to quickly exploit scenarios

are more important than very precise estimations obtained by

other methods.

To implement such software we can use an artificial neural network

(ANN), which is a computational processing model inspired by

the nervous systems of living beings. The model is a network of interconnected artificial neurons that mimics its biological counterpart. An important

ANN feature is its ability to acquire information, i.e. its ability to learn.

The network is “taught” to observe patterns in text based on well-

known examples and relate them to the cost of implementation.

This cost can be expressed in man-hours or a similar kind of

score. Then the network may infer values for subsequent cases.

The big problem is how to correctly identify such patterns that

appear in the text. This idea may span requirements specifications,

although it is easier when applied to a use case technique

perspective because this approach contains a small and simple

set of structural rules governing the composition of requirements

and related business rules.

Moreover, a weighted score may be assigned to each pattern based

on its frequency of occurrence. Similarity analysis and linguistic

analysis, as addressed in Karov or Hashimoto manuscripts, may

be used to determine such weighted scores [“Similarity-based

Word Sense Disambiguation”, Association for Computational

Linguistics, vol. 24, nº 1, pp. 20, 1998] and

[Dynamics of Internal and Global Structure

through Linguistic Interactions, MABS ‘98,

LNAI 1534, p. 124-139, 1998].

A domain dictionary is also used to

determine these weighted scores. This

dictionary is built during a learning process

based on a preliminary word set. The

purpose of this dictionary is to store the

structured knowledge acquired previously

over the domain and the type of system

that it represents. On the other hand, the

neural network handles the unstructured knowledge, which is acquired during training and stored inside the network.

Just as a person reads a text and evaluates the effort based on their own experience, forming an impression of complexity, the ANN evaluates the score based on memory, volume, the reading difficulty of common and uncommon words, and also on words related to the application domain. These are examples of the variables that define the memorized attributes of the neural network.

Software for supporting requirements specification or project

planning could take advantage of this kind of automation because

these activities almost always require some estimation effort.

For further information: TLC-BR Mini-Paper #091 (http://ibm.co/184qJ3S)

US-PTO Patent #US8311961 (http://1.usa.gov/12uVbOs)

http://en.wikipedia.org/wiki/Machine_learning


SECURITY INTELLIGENCE, A NEW WEAPON AGAINST CYBER CRIME
Alisson Lara Resende de Campos

In recent years, we have faced a different type of war, one not fought

with conventional, nor chemical, biological or nuclear weapons,

but with virtual weapons, the war referred to as “cyberwarfare”.

With the spread of the Internet, connecting everything and

everyone, today we have an unprecedented situation with respect

to access to organizations’ sensitive data or governmental top

secret information around the world. This is no longer a matter

of hackers competing among themselves to see who could

first hack into a particular web server. It is now an orchestrated

activity performed by big corporations or governments, with the

purpose of industrial espionage or to get information related to

weapons of mass destruction.

In this cyber warfare, new system exploitation techniques, known

as APT — Advanced Persistent Threat, are used. These techniques

make use of different types of malicious computer code like

worms, viruses and rootkits, or exploitation techniques such as

phishing and social engineering, in an orchestrated way. One of

the most famous episodes was the “Stuxnet” worm, designed

to attack industrial facilities, like those for uranium enrichment in

Iran, whose centrifuges were compromised. And that was just a

single incident, among others that took place or still may occur.

For this reason, traditional antivirus solutions and firewalls are

no longer enough to protect organizations, which creates the need for a sophisticated set of countermeasures to deal with this type of threat.

One of the main weapons against this pressing threat is known as "Security Intelligence", which originated in SIEM (Security Information and Event Management) solutions. These were created to collect and correlate technology events, but have had to evolve in order to meet the new reality in which we live.

Security Intelligence tools are designed to parse, normalize and

correlate huge volumes of data from applications, operating

systems, security tools, network streams and others. They analyze

critical infrastructure traffic and learn the expected behavior in

order to detect anomalies. That way, threats can be discovered

even before vaccines or systemic fixes are available. Therefore,

it is possible to proactively identify threats and illegal actions

right at the moment they occur or even before.
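As a minimal sketch of the baseline-and-anomaly logic such tools build on (a toy statistical rule over invented login-failure counts, far simpler than a real SIEM correlation engine):

# Minimal sketch: flag hosts whose failed-login rate deviates from the baseline.
# Event counts are invented; a real SIEM correlates many sources, not just one.
from statistics import mean, stdev

# Failed logins per hour observed during a "normal" baseline period, per host.
baseline = {
    "app01": [2, 1, 3, 2, 2, 1, 2, 3],
    "db01":  [0, 1, 0, 0, 1, 0, 0, 1],
}

# Counts observed in the current hour.
current = {"app01": 4, "db01": 35}

def is_anomalous(history, value, sigmas=3.0):
    """Simple rule: anomalous if the value exceeds mean + N standard deviations."""
    mu, sd = mean(history), stdev(history)
    return value > mu + sigmas * max(sd, 1.0)   # floor avoids zero-variance baselines

for host, count in current.items():
    if is_anomalous(baseline[host], count):
        print(f"ALERT: {host} shows {count} failed logins this hour (above baseline)")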

As I write this article, a new trend is emerging: the integration

of Big Data and Security Intelligence solutions. The information

exchange between these solutions will allow the improvement of

predictive analysis and the prediction of risks related to companies

and governments. This task was almost impossible to be done

until now, due to the high volume of unstructured data, such as

e-mail, instant messaging and social networks.

Data analysis, including behavioral and sentiment analysis, along with the ability to correlate high volumes of data and the interoperability of IT tools, is the response of the "good guys" in the fight against emerging cyber threats, with their new vulnerability exploitation techniques, espionage, fraud and theft of sensitive information from corporations and governmental entities. The bad guys aren't sleeping, and nor should you!

For further information: http://en.wikipedia.org/wiki/Advanced_persistent_threat

http://www-03.ibm.com/security/solution/intelligence-big-data/

http://blog.q1labs.com/2011/07/28/defining-security-intelligence/


TECHNOLOGY TRANSFORMING SMART CITIES
Dan Lopes Carvalho

One of the issues that has grabbed much attention from public

administration is how to provide quality services and infrastructure

to meet the needs of a modern urban system. This requires

dynamism and flexibility for an urban population, even more so in the case of Brazil due to its uneven population distribution. This makes the Smart Cities concept increasingly important.

A recent study, conducted by a group of European Union universities,

formulated the first academic definition of Smart Cities: “A city can be

defined as ‘smart’ when investments in human and social capital and

traditional (transport) and modern (Information and Communication

Technology — ICT) communication infrastructure fuel sustainable

economic development and high quality

of life, with a wise management of natural

resources, through participatory action

and engagement.” From that definition,

we can see that a smart city has several

factors such as human development, the

environment, transportation, security, the

economy, social networks and others.

In its strategic goal of building a modern urban system that can adapt to constant change, the government faces a variety of challenges ranging

from a precarious infrastructure to an

immense flow of information to be

managed. Technology can be used to overcome these challenges

and transform cities into intelligent urban systems.

A city transformation based on information technology has three

main pillars: instrumentation, interconnection and intelligence.

Instrumentation is the ability to capture information about the city through an infrastructure; in other words, deploying strategically located sensors that monitor and capture changes in behavior or environmental anomalies, such as the movement and concentration of people.

Interconnection is the ability of the city's management systems to transmit and receive several types of data, interact with events in other parts of the ecosystem and predict events. An example would

be to relate traffic events that can generate risks or impacts to

another urban system, such as public security.

Intelligence is the ability of the system to understand and generate

fast and automated responses to improve public services as a

whole in an integrated way. The most efficient method for measuring

the intelligence of a city is its capacity to interact with citizens

and generate quick and efficient system changes.
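As a minimal sketch of how the three pillars fit together (hypothetical sensor readings, thresholds and responses, not a real city platform):

# Minimal sketch: instrumentation (sensor readings), interconnection (events
# shared between urban systems) and intelligence (an automated rule).
# All readings and thresholds are invented for illustration.

sensor_readings = [
    {"sensor": "traffic-cam-12", "type": "vehicle_count", "value": 480},
    {"sensor": "noise-07",       "type": "noise_db",      "value": 58},
    {"sensor": "crowd-03",       "type": "people_count",  "value": 1200},
]

THRESHOLDS = {"vehicle_count": 400, "noise_db": 70, "people_count": 1000}

def correlate(readings):
    """Turn raw readings into events for other urban systems (interconnection)."""
    events = []
    for r in readings:
        if r["value"] > THRESHOLDS[r["type"]]:
            events.append({"source": r["sensor"], "anomaly": r["type"]})
    return events

def respond(events):
    """Intelligence: generate automated, cross-system responses."""
    for e in events:
        if e["anomaly"] == "vehicle_count":
            print(f"Transit: reroute traffic near {e['source']}")
        if e["anomaly"] == "people_count":
            print(f"Public security: dispatch patrol to the crowd near {e['source']}")

respond(correlate(sensor_readings))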

The vision of an urban system that is

integrated and has the capacity to

create synergies among the various

resources causes a change in the

current city-management model. The

new model must include a strategy

that is shared between many public

services such as safety, transportation,

mobility, energy and water. It must have

integrated information processing, and

be embedded in an environment that

is inter-city, inter-state, national and

even international.

Cities then begin a long and continuous journey toward this new urban management concept, in which governance should be shared, with an integrated vision, to provide fast, effective responses and better public policy planning, which in turn gives citizens a better quality of life.

In this context, technology transformation is the foundation for

this new model of Smart Cities.

For further information: http://www.smartcitiesineurope.com

http://www.ibm.com/smarterplanet/br/cities


CROWDSOURCING: THE POWER OF THE CROWD
Carlos Eduardo dos Santos Oliveira

“Alone we are strong, together we are stronger.” You’ve probably

heard this phrase many times. Welcome to the world of

crowdsourcing. The term was coined by journalist Jeff Howe in his article "The Rise of Crowdsourcing," published in Wired magazine in June 2006.

The concept of bringing together people with different skills to

achieve a common goal is older than the term created by Howe. A

good example, and perhaps the oldest and most famous in Brazil, is the Free Software Foundation (FSF), which has among its objectives the dissemination of the open source culture. This movement was born in 1985, led by Richard Stallman, with a very simple proposition: a cycle in which each person adds his or her knowledge to the group by developing open source software. In practice, if you use free software, you can improve it and return it to the group so it can be reused by others, restarting the cycle.

In recent years crowdsourcing has gained strength, mainly in IT environments, passing from anonymity to fame due to the emergence of derivations such as Crowdfunding and Crowdtesting.

The basic concept of Crowdfunding is

the collection of funds through small

contributions from many parties in order to finance a particular

project or venture. In Brazil, Catarse is the main promoter of Crowdfunding, enabling anyone to financially support any project listed on its site.

Recently the Transparency Hacker community raised sufficient

funds, with Catarse's support, for the Bus Hacker project, which financed the acquisition, renovation and modernization of a bus in order to spread the "hacker" culture throughout the country.

Crowdtesting, in turn, applies the concept of the crowd to tests of any nature or any specific area of knowledge. This concept is widely used by large technology companies like Microsoft and Google through their beta application testing programs.

In IT there are countless test scenarios that can be performed by a crowd and that could not be simulated by a single company's effort alone. However, there are some considerations about information security and privacy, especially when dealing with innovation or with something of strategic importance.

Given the power, range, scenarios and benefits that the

Crowdtesting provides, many companies have adopted this

concept and reduced product test cycle time. Another advantage

is the enormous diversity of operating systems, devices and

settings that this test model may cover, something very difficult to

achieve within a corporate environment.

Linux community users are an example of Crowdtesting: each user gets a copy, installs it and reports bugs to the vendor or to the group of developers for correction.

In this model, there are two possible returns on investment: financial or reputational. I recommend reading the Mini Paper by Wilson E. Cruz (pg. 71), in which reputation is discussed further. If the goal is financial return, there are already test outsourcing services based on Crowdtesting that hire specialized testers for regular work.

In recent years, global companies have joined Crowdsourcing, creating their own programs in search of competitive advantage or innovation. Among them are giants such as PepsiCo, P&G, Ford, Dell, Starbucks and Fiat. Crowdsourcing is growing fast, gaining space in the media and in important corporations, with the support of these companies.

For further information: http://crowdsourcing.typepad.com

Crowd Testing – Applicability and Benefits

http://blog.ideiasnamesa.com.br/tag/crowdsourcing/


TOGAF – WHAT IS IT AND WHY?
Roger Faleiro Torres

TOGAF (The Open Group Architecture Framework) is a conceptual

model of Enterprise Architecture designed in 1995 by The

Open Group Architecture Forum, whose goal is to provide a

comprehensive approach for the design, planning, implementation

and governance of architectures, thus establishing a common

language of communication between architects.

TOGAF is currently at version 9.1, which was published in December

2011. It is based on an iterative process, and uses reusable,

cyclical best practices to model the core or main activities of an

organization. It covers the four types of architecture that are commonly accepted as subsets of an Enterprise Architecture, namely: business, data, applications and technology.

The content of TOGAF is presented in seven parts:

1. Introduction, which includes basic concepts about Enterprise Architecture, TOGAF, terminology and expressions adopted;

2. The method for developing architectures (ADM - Architecture Development Method);

3. ADM Guidelines and Techniques;

4. Architecture Content Framework;

5. Enterprise Continuum & Tools;

6. TOGAF Reference Models;

7. Architecture Capability Framework.

In summary, the Architecture Development Method (ADM) is a method for the development and maintenance of Enterprise Architectures. The Architecture Capability Framework specifies the actors and roles who operate the ADM, as well as the techniques, guidelines and best practices for storing content in a repository. That content is organized according to the Enterprise Continuum. The repository is initially

populated with Reference Models, such as the TRM (Technical

Reference Model) and III-RM (Integrated Information Infrastructure

Reference Model), which are part of TOGAF.

The ADM, illustrated in the figure, is considered to be the main

component of TOGAF, comprising several components that interact

with each other, through the fields of architecture, to ensure that

all business requirements are properly met. An advantage for

the adoption of ADM is that it can be adapted to the terminology

adopted by the company.

Why should Enterprise Architecture and TOGAF be treated as strategic by companies? Enterprise Architecture helps to identify gaps between the company's current state and its desired state, providing a plan for the organization to reach its goals and describing it at multiple levels of breadth and depth. TOGAF, in turn, accelerates the development cycle of this architecture, providing answers to questions such as what, who, when, how and why.

For further information: http://www.opengroup.org/togaf/

http://pt.wikipedia.org/wiki/TOGAF

Enterprise Continuum:

http://pubs.opengroup.org/architecture/togaf9-doc/arch/chap39.html


REVEAL THE CLIENT THAT IS BEHIND THE DATA
Mônica Szwarcwald Tyszler

We are overloaded daily by information arriving from several sides,

by e-mail, social networking, press, and billboards. According to

an IDC study, around 15 petabytes of data are generated every day, and this data flow is expected to reach 8 zettabytes by 2015. This giant amount of information makes it difficult to select what's relevant. This is also an issue for companies that attempt to get to know their customers' profiles in order to provide customized products and services according to their needs.

The first step for modeling consumers’ behavior is to understand

the level of existing information about them in the company and

use it in the right way. It is also essential to know how to integrate

the business to the individual characteristics of the customers.

The raw data lead to a limited vision of who the customer is and

what he or she wants.

For an effective result, companies should seek to consolidate a 360-degree view of the customer in a single place. Tools for data collection, management and analysis have emerged to better understand customers' wishes. Customer analytics, a term in vogue among big companies, is the synthesis of this effort: getting to know consumer behavior and knowing how to create models to strengthen that relationship.

Customer information in systems such as ERP, CRM, records,

and data obtained from external sources such as marketing

agencies or market research companies, must be consolidated

and analyzed in order to translate their behavior into numbers. The

process of obtaining such data is gradual and evolutionary, and

leads to constant learning about which information is valuable and what the next steps should be.

A considerable part of this data universe needs to be transformed

before it is used for an effective analysis. The science behind this

analysis lies in the application of statistical, mathematical or even econometric concepts, such as inference, correlation, and linear and logistic regression, to reveal previously hidden information. The study of consumer profiles is possible thanks to the application of scientific methods that enable customer segmentation, offer modeling and custom loyalty programs.
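As a minimal sketch of one of those techniques, a logistic regression can be fitted to past behavior to estimate a customer's propensity to accept an offer (the attributes and figures below are hypothetical, and scikit-learn is assumed to be available):

# Minimal sketch: propensity-to-buy model from a handful of customer attributes.
# The features, labels and values are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [visits last month, average ticket (R$), days since last purchase]
customers = [
    [12, 250.0,  3],
    [ 1,  40.0, 90],
    [ 8, 180.0, 10],
    [ 2,  60.0, 60],
    [15, 300.0,  2],
    [ 3,  55.0, 45],
]
accepted_last_offer = [1, 0, 1, 0, 1, 0]   # observed outcome

model = LogisticRegression()
model.fit(customers, accepted_last_offer)

# Score a new customer: probability of accepting the next offer.
new_customer = [[6, 120.0, 20]]
probability = model.predict_proba(new_customer)[0][1]
print(f"propensity to accept the offer: {probability:.0%}")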

Better forecasting and smarter decisions enable retailers, banks,

and insurance brokers, for instance, to generate larger sale volumes

by creating real-time promotions and offers.

Following this trend, technology provider companies are ready to

offer these services to all sectors of the economy, including not

only products, but also specialists able to apply data analysis

in various industries, in the most varied scenarios.

Data analysis technologies allow us to identify customers when

they enter a physical store or visit an online store, associating them with their consumption history, habits, preferences and socioeconomic status. That enables the creation of offers and proposals for products and services that match the customer's needs, providing a unique and personalized interaction experience.

Customer analytics is the way for companies to explore a new

competitive frontier.

For further information: http://www.wharton.upenn.edu/wcai/

http://www.customeranalyticsevent.com/

http://www-01.ibm.com/software/analytics/rte/an/customer-analytics/


SINGULARITY: ARE YOU READY TO LIVE FOREVER?
César Nascimento

Stop and imagine what your life would be like if humanity were an immortal race. Or rather, think of the impact if humanity becomes immortal. The consequences for our political and economic systems would be, without a doubt, large and deep. Leaving the human aspects aside, there are, in fact, real possibilities, supported by great scientists, that mankind will reach immortality soon.

According to Raymond Kurzweil[1], an Artificial Intelligence

researcher, it is possible that immortality will be achieved within this century[2]. The immortality

of the human race is part of

several predictions that Kurzweil

made and called Singularity –

a profound transformation in

human capabilities – which

according to the researcher

should happen in 2045.[3]

Kurzweil’s predictions are

based on mathematical models

that propose scientific and

technological development on an exponential scale. To give an idea, his models show that the last two decades of the twentieth century were equivalent to the progress of the eighty years that preceded them. We would make another twenty years' worth of progress in just 14 years (by 2014) and then do the same again in just seven years. On the exponential scale proposed by Kurzweil, the first 14 years of the 21st century would surpass the scientific progress achieved throughout the entire previous century. To put it another way, the 21st century will not bring a hundred years of technological advancement, but a breakthrough about 1,000 times greater than what was achieved in the 20th century[3].

According to Kurzweil, immortality can be achieved by means

of two combined factors: GNR (Genetics, Nanotechnology, and

Robotics) and exponential computational progress predicted

by Moore’s law[4].

The GNR will contribute to improving the quality of life of human beings, increasing life expectancy by many years. The combination of robotics and nanotechnology will help us create effective, targeted and less invasive treatments, because it will be possible to program nanobots for the eradication of any disease. Imagine some examples: nanobots in a person's bloodstream that remove excess fat or sugar, make corrections to the cornea, and eliminate viruses, bacteria or parasites.

The robotics and the exponential evolution of computing provide

the second part of the equation of immortality.

It takes about 10 quadrillion (10^16) calculations per second (cps) to provide a functional equivalent of the human brain.

It is estimated that in 2020, this

computational capacity will

cost about $ 1,000 and that

in 2030, those same thousand

dollars in processing power

will be about a thousand

times more powerful than a

human brain. Today, there

are mathematical models and

simulations of a dozen regions

of the brain. According to

current research, it is possible

to simulate about 10,000

cortical neurons[5], including

tens of millions of connections.

This means that if we have the hardware, the software and control over our bodies, we can literally make a replica of our brains. In this way, it all boils down to a phrase used by

Kurzweil himself: “live long enough to live forever”[6]. Live long

enough to take advantage of the improvements that genetics

and nanotechnology will bring, so you can live even longer,

perhaps until the tipping point from which you will be able to

live indefinitely.

For further information: [1] https://ibm.biz/Bdx3Ar

[2] https://ibm.biz/Bdx3AY

[3] https://ibm.biz/Bdx3AZ

[4] https://ibm.biz/Bdx3Aw

[5] https://ibm.biz/Bdx3uT

[6] https://ibm.biz/Bdx3ub


NOW I CAN TWEET
Sergio Varga

The speed with which new

services and products

appear on the Internet and

spread is impressive. In the

case of Facebook, Twitter

and YouTube, tools initially

designed for the sharing of

information among people

have become social media

standards. On the other

hand, many companies prohibit access to these tools in the workplace, under the pretext that they are not related to business activities and are therefore a distraction for employees.

What was once forbidden is now encouraged by those same companies. But what led to this change of position? Several

reasons can be cited, among which are relationships, publicity,

opinion forming, the frantic need for information and the speed

at which it reaches the consumer.

Companies saw in these social media new opportunities to reach

customers — innovative marketing channels. For a long time, it has

been observed that word-of-mouth advertising is one of the best

ways to get new consumers and recent research proves it. Social

media simply made it possible to increase this kind of publicity exponentially. Social relationships also no longer have borders.

After joining a community or associating with a friend, opinions

disclosed on the Internet now are “listened to” instantly and with

greater breadth, because your friend’s friends also see your opinion.

The companies' change of position is also linked to the influence of opinion formers, reference figures and celebrities on social networks. Their power of attraction over other people is very high, making them great promoters, or "detonators", of products and services.

In addition, we have the incessant search for information. We are

moved by knowledge and curiosity. It can be from the simplest

or banal information to the most important or high priority. And

this search has also been supplied by social media.

Another point is the speed at which information reaches the consumer. By the time you read this article, you may be receiving

an instant promotion on Twitter, informing you that a TV is being

sold by a large retail store with special discounts. This was not

possible a few years ago and now companies are increasingly

using social business.

The use of social media by companies to promote products, and by employees to share experiences, opinions and information, has been a subject of concern for

businesses. This same concern occurred in the past with the

advent of e-mail in the 80s.

The most important point is to define criteria for how to behave towards the outside world: whatever an employee posts in social media has to comply with the employer's guidelines, because the employee is representing the company at that moment. Another point is

the creation of initiatives such as the use of blogs to comment on

products, tweets about events, a company’s Facebook page and

promotional videos on YouTube. These initiatives allow employees

to participate and disseminate such content in social media.

Companies also request their employees’ assistance to respond

to comments or questions of consumers related to products

or services. Social media, which are constantly monitored, are important channels through which consumers can reach companies.

The greatest difficulty for the employee is to reconcile personal participation in social media with its use for business: can an employee use Twitter and Facebook to talk about both the company's business and personal affairs? Or should he use Facebook for personal matters and Twitter for professional ones? There is no rule, and the employee must decide.

Well, one thing is clear: now I can tweet without my boss getting cross, can't I? Let's tweet, then?

For further information: http://bit.ly/15wuLf7

http://bit.ly/17t1jLS


THE NEW CONSUMER
Ricardo Kubo

My kids no longer watch television the way I did. Today they watch what they want, when and where they want, using the Internet. I have always looked for personalized services, and I ask myself: what kind of consumers will they be in this context?

This new client expects services from a retailer in a unique way and without any commitment to loyalty. Recently, I had a memorable experience at a store in the interior of São Paulo, where the purchasing process was the same as in the old days, using a credit notebook. There, I received personalized attention, from the shop attendant's warm greeting to credit payment without any bureaucracy. However, upon returning to São Paulo, I began to value convenience and other important attributes, such as speed of purchase and delivery, which turn my preference toward online shopping.

In a previous experience, working at an Internet start-up, it was possible to operate with a healthy bank account even after the NASDAQ bubble burst in 2001: a traditional company sustained the operation by joining the virtual world, forming what, in the jargon of that time, was known as Bricks and Clicks. Currently, competitiveness in electronic commerce erodes values such as service and warmth, offered in traditional stores. The recovery of these values, taking advantage of the

synergies with the digital world, is one of the biggest challenges

faced by major retailers.

In 2011, TESCO, the world's third-largest retailer, boosted online shopping by using virtual shelves in subway stations in Korea. Anthon Berg succeeded by opening stores that explored the engagement of its consumers via social media. This synergy between the real and virtual worlds is an alternative to be explored in order to compensate for e-commerce's low profit margins. It also applies to industry, which is already beginning to create joint, interdependent initiatives in the real and virtual worlds.

Many brands invest in concept stores to generate a customer

shopping experience exploring vision, smell and even emotions,

with the purpose of retaining the consumer. This also takes into account factors related to generational differences and their buying propensities.

With the increase in scale, personalized service requires technology solutions to improve the consumer experience. These solutions can help identify new customers, interact with them and customize the service for them.

To provide relevant information to these solutions, technologies

such as biometric recognition, e-commerce platforms, digital

campaigns (leveraging social media or not) and back-office

systems, are all great data collectors that can be analyzed to

understand the individual’s behavior related to the different aspects

that a brand may offer.

In this context a new variable arises, cognitive computing, which adds new capabilities for digesting this explosion of data.

Finally, there is also the impact of mobility: devices like smartphones leave valuable digital traces, such as the customer's location in real time. This gives even more power to the consumer, who can physically visit a store while comparing other retailers' prices, and who can negotiate locally or buy from a remote competitor using his smartphone. This scenario generates

direct impacts on the business model, pricing, promotions and

service levels, which often differ between digital channels, physical

stores or call centers.

In fact, the consumer is omnipresent, and future generations will be increasingly instrumented, informed and short-term oriented. If we compare how our parents shopped in the past and how we shop today, we notice that many new habits have been adopted in a very short time. Who's ready to serve this new consumer?

For further information: http://en.wikipedia.org/wiki/Bricks_and_clicks

https://www.youtube.com/watch?v=nJVoYsBym88

https://www.youtube.com/watch?v=_cNfX3tJonw


TRANSFORMING RISKS INTO BUSINESS OPPORTUNITIES
Alfredo C. Saad

The concept of risk emerged in the transition from the Middle Ages to the Modern Age, throughout the 16th and 17th centuries. Until then, despite the remarkable progress already achieved in other areas of human knowledge, no one dared to challenge the design of the gods, which seemed to determine future events. As a result, observed events were treated as merely a matter of good or bad luck. A deeply rooted fatalistic vision prevented people from even imagining actions that could increase the likelihood of favorable events or diminish the likelihood of adverse ones.

The innovative air brought by the Renaissance led the thinkers of the time to challenge this fear of the future, causing them to develop and improve quantitative methods that anticipated varied future scenarios, as opposed to a single scenario imposed by fate. One of the first milestones was the solution, by Pascal and Fermat in 1654, of the puzzle of how to divide the stakes of an interrupted gamble. From there arose the first foundations of probability theory, fundamental to the concept of risk.

From this newly created perspective there emerged, throughout the 18th century, numerous applications in different areas, such as calculating the life expectancy of populations and even improving the pricing of insurance for sea voyages.

The continuous evolution of quantitative methods brought these applications to the corporate world, already in the Contemporary Age. Texts written by Knight in 1921 (Risk, Uncertainty and Profit) and Kolmogorov in 1933 (Foundations of Probability Theory), as well as Game Theory, developed by von Neumann in 1926, are the bases for the contemporary evolution of the theme. Among the areas addressed since then are decisions on mergers and acquisitions, investment decisions and macroeconomic studies.

The evolution of the discipline of risk management has identified

four different ways to react to risk, namely: accept, mitigate, transfer,

or avoid the risk. There is, however, a fifth, innovative form of reaction: to transform the risk into a business opportunity.

An example application of this concept can be seen in IT services outsourcing contracts. Typically, the client hires the service provider to operate its organization's IT environment with quality levels preset in a contract, which is meant to guarantee that any failures will not significantly impact the customer's business.

In this scenario, a relevant part of the service provider's activity is the continuous effort to identify and treat vulnerabilities in the client's IT environment that may affect its business.

It is known that building in the client the perception that the provider acts proactively to identify potential risk factors to its business significantly increases the client's willingness to hire new services.

Moreover, the treatment indicated for the vulnerabilities identified often requires actions that are outside the scope of the contracted services.

This scenario illustrates the fifth way to react to an identified risk: the generation of a new business opportunity, made possible by expanding the scope of the contracted services with the purpose of eliminating, or at least mitigating, the factors that put the customer's business at risk.

The permanent exercise of this proactive behavior by the provider consolidates, in the customer's perception, the idea that the provider is able to generate relevant added value, namely ensuring that the customer's own business is protected by effective IT risk management. Such added value extends well beyond the strict commercial boundaries of the contract, creating bonds of mutual trust that are valuable to both parties and that can generate partnership initiatives in previously unexplored areas.

For further information: Six keys to effective reputational and IT Risk Management

The convergence of reputational risk and IT outsourcing

Bernstein, Peter L. – Against the Gods: The Remarkable Story of Risk, John Wiley & Sons Inc, 1996


QOS IN BROADBAND ACCESS NETWORKS
Mariana Piquet Dias

The demand for broadband has increased significantly due to

a variety of applications that are carried over the Internet, such

as television (IPTV), voice (VoIP), video on demand (VoD), video

conferencing and interactive games. Thousands of users of those applications compete for the same broadband access network resources, which can degrade the performance of contracted services. Who has never had the experience of a video interrupted by network slowdowns, or of excessive noise on VoIP calls?

For this reason the broadband providers need to ensure adequate

levels of Quality of Service (QoS) on the network to meet the

requirements of users and their applications. A proper QoS policy will classify and prioritize traffic according to those requirements.

This scenario brings a great challenge for the telecom companies,

because the QoS policy needs to be deployed from end to

end on complex networks that use multiple broadband access

technologies like ADSL (over the telephone network), DOCSIS (over the cable TV network) and GPON (over fiber optics), in addition to the

mobile technologies Wi-Fi and 3G/4G.

Creating this policy requires a good understanding of the main QoS parameters: network availability, bandwidth, latency and jitter.

Availability matters because network outages,

even of short duration, can compromise application performance.

Bandwidth is another important parameter that affects the QoS

planning. Many networks operate without bandwidth control, allowing certain applications to overuse the medium and compromise the bandwidth available to other services.

Latency, or network delay, is the time a data packet takes to travel from source to destination. Jitter is the variation of this delay. When latency or jitter is very high, real-time applications such as voice and video can be severely impaired.
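As an illustration of the parameters above, here is a minimal Python sketch (with hypothetical delay samples) that computes latency as the mean one-way delay and approximates jitter as the mean variation between consecutive delays:

import statistics

# Hypothetical one-way delay samples (in ms) collected by a network probe
delays_ms = [30.1, 32.4, 29.8, 45.0, 31.2, 30.5]

latency = statistics.mean(delays_ms)                     # average delay
jitter = statistics.mean(abs(b - a)                      # mean delay variation
                         for a, b in zip(delays_ms, delays_ms[1:]))

print(f"latency = {latency:.1f} ms, jitter = {jitter:.1f} ms")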

In developing a QoS policy, these parameters must be planned

from end to end on the network, analyzing the entire path from the user to the service provider. It is also necessary to meet the requirements of each application and user. However, the dynamism of the market shows that these requirements change rapidly over time. It is therefore necessary to implement monitoring and network analysis solutions that identify changes in traffic behavior so that the QoS planning can be adjusted.

Some features and functions of these solutions are important

in managing the user experience, such as real-time traffic

graphs, support for traffic shaping or speed limitation, site

blocking and content filtering. These solutions allow you to

view and analyze traffic, supporting the operator in setting up

more effective QoS policies.

With this cycle of monitoring and planning it is possible to

have an effective QoS plan that enables broadband networks to support current and future services. This brings a great opportunity for solutions that combine monitoring services and analytical tools, and it will lead carriers to invest in more efficient networks and better-quality broadband access.

For further information: http://tinyurl.com/lmmfy6d

http://en.wikipedia.org/wiki/Quality_of_service

http://en.wikipedia.org/wiki/Network_traffic_measurement

https://ibm.biz/BdDGFH

Page 93: Transformation and Change

93

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

DO MACHINES FEEL?
Samir Nassif Palma

Some futuristic or science fiction films show us machines that

learn and take command of the world. Other films show us robots that have feelings, consider themselves human and want to become one of us. After all, is it possible for machines to have feelings? Can they make decisions? Our grandparents would surely answer no, but current reality presents us with

something different.

Similarly to people, machines have built-in functional systems

responsible for the execution of their actions and tasks. However,

these systems are created, coded, tested and implemented by

people. The functions of these systems are defined according to a purpose and aim to meet the ultimate goal for which the machine was designed.

We have seen machines achieve goals that were previously unthinkable, such as beating a chess champion or even winning a quiz show. In addition, there are machines that produce weather forecasts, discover oil beneath the seabed and map the best route between two addresses. These are machines

with specialized internal systems that analyze data and make

decisions. Therefore, we have a response to the questions posed

at the beginning of this article. What about feelings?

Along with the verb “to feel” come experience, perception, emotion and value judgment. Feeling that something is good or bad can be translated into positive or negative. There is also indifference, i.e., the neutral value. This is also the form used to structure the approach to feelings in machines: given the scenario and conditions presented, it is possible to determine a response that is positive, negative or neutral.

What would be the point of having machines and systems dealing

with feelings? One answer would be to attempt to model human

behavior to predict a person’s next move.

Companies are interested in sentiment analysis as a way to take more assertive actions that increase sales or avoid the loss of customers. Research revolves around the reputation and behavior of customers during the launch or consumption of products and services, for example, how to evaluate a target audience's perception of advertising and marketing campaigns and what returns they generate.

An alternative that is being used currently is the interpretation of

comments on social networks or on websites through the use

of text mining techniques. However, sentiment analysis in text is an extremely challenging task: slang, language errors, implicit subjects, abbreviations and obscure context are examples of some of the difficulties. The good news

is that a lot of this is already possible. The cognitive process is

similar to that used in the education of children. It requires a

lot of guidance, method (structure and process) and practice

(training and experience). In this way the machine can learn to

collect, interpret and even feel what is hidden.
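To make the idea of positive, negative and neutral values concrete, here is a minimal lexicon-based sketch in Python; the words and scores are illustrative assumptions, not a real sentiment lexicon or the method of any specific product:

# Minimal lexicon-based polarity sketch; the scores below are made up.
LEXICON = {"excellent": 2, "good": 1, "slow": -1, "terrible": -2}

def sentiment(comment: str) -> str:
    score = sum(LEXICON.get(word, 0) for word in comment.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Good service but terrible delivery"))  # prints "negative"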

In translating human language to machine language, the lexical analysis used to determine sentiment is entirely geared to the context and the object to be evaluated. The technique for analyzing a product, for example, is different from that for examining a person (an artist, or a politician in an election campaign). Analyzing sports teams is different from analyzing the image or reputation of organizations.

Text interpretation is only one use case for sentiment analysis. There are other techniques and models, such as the combination of events, that are also used. Returning to the question in the title, we can affirm that machines do feel; we just have to teach and train their internal systems. But rest assured, there is still nothing that could lead us to a catastrophic ending like those in some sci-fi films.

For further information: IBM Analytics Conversations

Techniques and Applications for Sentiment Analysis - ACM

Creating a Sentiment Analysis Model – Google Developers

Introduction to Sentiment Analysis - LCT

Page 94: Transformation and Change

94

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

UNDERSTANDING AT AND IT
Marcelo de França Costa

Automation can be defined as the use of machines and control

systems to optimize the production of goods and delivery of

services. This set of hardware and software, called Automation

Technology (AT) is applied with goals that include increasing

productivity and quality, reducing the number of failures and the generation of waste, achieving economies of scale and improving safety conditions. AT is the step beyond mechanization, further decreasing the need for human intervention in the process.

One example of the use of AT is the Smart Grid: intelligent power grids that seek to improve energy distribution by monitoring power quality and consumption in real time through devices known as smart meters. Thus, the customer's residence is able to “talk” to the distribution company, warning about a power problem before the customer can even pick up the phone to complain.

Taking a look into the Smart Grid, we find that it is a solution that

makes use of Information Technology (IT) and telecommunications

as an information source and takes automatic actions, according

to the behavior of suppliers and consumers.

When analyzing AT in the corporate world, in a more strategic context, given its proximity to IT, it would be natural to think of it both in the process definitions and in the objectives of IT governance.

This portion of corporate governance is responsible for coordinating

technology departments by aligning their processes to ensure

that they support the corporate strategy and contribute to the

organization’s effort to achieve its business goals. IT governance is expected to enable benefits such as alignment with best practices and international standards, easier audits, simpler management, transparency in the operations areas and more rational investments with a clearer view of the expected return.

The proposal is that IT governance be extended to the AT area, so that automation engineers, for example, do not perform their work unaware of the company's global context, but within an AT-area philosophy aligned with corporate planning. A good way to do this would be to build on models from well-known standards such as CMMI, COBIT, ITIL and ISO.

The integration between automation systems with process control (AT) and enterprise systems (IT) is a long-standing requirement. One of the most cited reference models in the AT area is ISA-95 (see figure), an international standard created by ISA (International Society of Automation). This standard determines what information should be exchanged between production systems (production, maintenance and quality) and the back office (purchasing, finance and logistics).

Enterprise systems such as ERP

(Enterprise Resource Planning)

are usually not designed to interface directly with shop-floor systems. Acting as intermediaries between these two worlds are PIMS (Process Information Management System) and MES (Manufacturing Execution System) systems, at level 3 of the ISA model. These systems control production and collect data from the manufacturing plant through level 2 subsystems such as SCADA (Supervisory Control and Data Acquisition), organizing, storing and delivering the data to the level 4 applications responsible for production planning.
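A minimal Python sketch of the data flow just described, with hypothetical tags and thresholds, may help visualize the layering (level 2 collection, level 3 aggregation, level 4 planning):

# Level 2: readings collected on the shop floor (e.g., via SCADA); values are hypothetical.
scada_readings = [
    {"tag": "furnace_temp_C", "value": 812},
    {"tag": "units_produced", "value": 1450},
]

def mes_aggregate(readings):
    # Level 3 (MES/PIMS): organize and store production data
    return {r["tag"]: r["value"] for r in readings}

def erp_plan(production):
    # Level 4 (ERP): use the consolidated data for production planning
    return {"replan_needed": production.get("units_produced", 0) < 1500}

print(erp_plan(mes_aggregate(scada_readings)))  # {'replan_needed': True}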

In order to pursue the alignment between IT and AT and their

integration, special attention should be given to data communication

networks, which should be segregated and protected. Security

breaches in networks and AT systems, especially those that

control industrial facilities such as power plants, boilers and

nuclear reactors, can result not only in financial losses but also in disasters of major proportions.

When the synergy between AT and IT prevails, the company is

the real winner. AT has considerable information to offer IT, just as IT has much learning and many good practices to contribute to AT projects.

For further information: http://www.isa-95.com/

http://www.isa.org

Page 95: Transformation and Change

95

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

“GRAPHENE’S VALLEY” AND TECHNOLOGY REVOLUTION
Carlos Alberto Alves Matias

Graphene is a flat layer of carbon atoms, from the same family as graphite and diamond, arranged in a hexagonal pattern and with several very interesting properties: it is strong, lightweight, flexible and almost transparent. As an electrical conductor, graphene may replace silicon in the production of certain electronic equipment, making the equipment more efficient, compact and faster. The applications of graphene seem endless: nanotechnology, faster Internet access, more durable and rechargeable batteries, more efficient water filters, more resistant cements, more economical and less polluting engines, all at low cost and using an inexpensive raw material.

Graphene was first described in the 1930s but received little attention until the Russian-born scientists Konstantin Novoselov and Andre Geim isolated the material at room temperature, earning the Nobel Prize in Physics in 2010. Given the amazing properties of

Graphene, laboratories around the world are investing in research,

so scientists can develop new and important applications.

The European Commission will invest one billion Euros to support

pioneering projects in the next decade. The U.S. and other

countries are doing the same. Built in an area of 6,500 m2 at

Mackenzie University in São Paulo, Mackgrafe Research Center

will have an approximate investment of R$ 30 million and is

expected to be inaugurated in May 2014.

Currently, 1 kg of graphite costs about $1, and from it 150 g of graphene can be extracted, valued at $15,000! The graphene market has a potential of up to $1 trillion within 10 years. Brazil is estimated to hold the largest reserves in the world, according to a report published in 2012 by the DNPM (National Department of Mineral Production).

Graphene is already used to manufacture battery electrodes, touch screens, digital electronic devices and composites for the aeronautical industry. However, experts say the best is yet to come.

A new type of data transmission cable may increase Internet

speed. According to research published in the journal Nature Communications, the idea is to take advantage of the speed achieved by electrons in graphene. Scientists at Berkeley, on the other hand, think the secret of high speed lies not in the cables but in the modulators of network equipment, which are responsible for managing the transmission of Internet data packets.

Purifying salt water by turning it into fresh water at low cost could help dry regions such as the Brazilian northeast. The process, created by researchers at the Massachusetts Institute of Technology (MIT), passes seawater through an extremely thin graphene filter, which retains impurities and can even remove radioactive materials, such as those released in the recent contamination in Fukushima.

At the University of California, a student found that when a single layer of graphene was subjected to an electrical charge for two seconds, an LED remained lit for five minutes. Engineers at Stanford University replaced the carbon in a new battery design with graphene, and the battery recharged in a few minutes, about a thousand times faster.

Graphene has 200 times the electron mobility of silicon, which will allow the production of more powerful processors with frequencies of up to 300 GHz. Graphene monoxide has the versatility of behaving as an insulator, a conductor or a semiconductor, which can be very useful in nanochips.

Have you ever imagined a mobile phone in the form of a bracelet? This may become possible thanks to the flexibility of graphene. Many companies have already registered patents in this area, and revolutionary research promises even more advances. The future deserves graphene, and it will change our lives!

For further information: https://ibm.biz/BdDNb4

https://ibm.biz/BdDNbs

https://ibm.biz/BdDNbi

Page 96: Transformation and Change

96

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

THE TIME DOESN’T STOP, BUT IT CAN BE BEST ENJOYED…
Antônio Gaspar, Hélvio Homem, José C. Milano, José Reguera, Kiran Mantripragada, Marcelo Sávio, Sergio Varga and Wilson E. Cruz.

For many people over 35, it feels as if Ayrton Senna “died yesterday”. It feels like you could close your eyes and relive the emotion of turning on the TV on a Sunday morning and seeing him win once again. Ayrton died on May 1st, 1994. The surprising thing is that the World Wide Web and its browsers only really started to gain popularity in 1995, yet few people can clearly remember what life was like without the Internet. What is the cause of this strange paradox?

Daniel Kahneman, an Israeli psychologist, introduced insights from his field into economic science, especially with regard to evaluation and decision-making under uncertainty. He stated that there are two systems in our brains. The first is a slow system that involves attention and focus; we use it in activities of which we are aware and in control. The other is extremely fast, independent and uncontrollable; it is lousy at statistics but great at generating quick decisions by comparison.

In 2002, Kahneman received the Bank of Sweden prize in economic

sciences in memory of Alfred Nobel (commonly and erroneously called the Nobel Prize in Economics) for this work. So the question to ponder is: does our sense of time have anything to do with the systems described by the acclaimed scientist?

Apparently yes, and there are also signs that this perception varies from person to person. The biological clock would be a personal measure of this sort, and it depends on a referential perception; that is, two people in the same place performing the same activities may perceive that time passed faster for one than for the other. That happens, among other things, because each one records the facts at a different level of intensity, depending on their personal relationship to the events.

There is a phrase attributed to Einstein that clarifies this phenomenon:

“When a man is with a pretty girl for an hour, it seems like a

minute. But when he sits on a hot plate for a minute, it seems

like an hour”. He called this “relativity”; a brilliant idea that points

us to an experiential time which passes more quickly or slowly

depending on how the person looks at a particular experience.

In other words, there are two variables that combined give each

of us a clue about the perception of time. The first variable is

related to our senses (the facts, the hot plate), and the second

one is how we view or respond to these facts, i.e., the frequency,

the intensity and the particular way each person’s brain makes

the connections (or synapses) in response to what happened.

This understanding of perception can lead us to a positive way

to react to the unpleasant sensation of time passing faster and

faster. We can simply choose life itself (the facts) and dive into

it with special attention to every moment, making it unique and

worthy of many brain connections, enjoying it as if it was the first

time (or the last). The automatic response of worrying about time

passing too quickly is undoubtedly more comfortable but steals

the ability to live fully the moments and make them unforgettable.

It also creates that feeling of wasting time, very well portrayed in

the Pink Floyd song “Time”. And, to take the most advantage of

the time, it’s worth remembering another song “Seasons of Love”

from the Broadway musical Rent. This song suggests measuring

a year not only for its 525,600 minutes, but more importantly for

the good experiences during that time, whether at work, at home

or in the community in which we live.

It is common to imagine time as something continuous, infinite

and perhaps even cyclical. At least, that's how Stephen Hawking attempted to describe the shape of time, using a “Möbius strip”. The intriguing aspect of this topology is that there is no inside or outside, no beginning or end; we continuously travel through the same space. In his book “The Universe in a Nutshell”,

Hawking states that most of us hardly ever pay attention to the

passage of time, but every once in a while we get amazed with

the concept of time and its paradoxes.

So, here we are, the editorial Committee of the TLC-BR, after 200

Fortnights. Is that a long or a short time? As we look at Mini Paper

number 1, four hundred weeks ago (just over four million minutes),

we can still remember the moment of its creation and also the many

adventures enjoyed in the course of its publication. Hot plates

existed, but the good chats with authors, reviewers, readers, and

even critics, recorded millions of unforgettable connections into our

brains. Each Mini Paper was unique and all of them were important.

We want new themes, authors, experiences and synapses that allow us to enjoy the moments of their creation and publication, just as we hope to have provided rewarding moments of reading. May 200 more Mini Papers come into being!

For further information: http://en.wikipedia.org/wiki/M%C3%B6bius_strip

http://en.wikipedia.org/wiki/Daniel_Kahneman

http://en.wikipedia.org/wiki/Stephen_Hawking

Page 97: Transformation and Change

97

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

ONTOLOGIES AND THE SEMANTIC WEB
Fábio Cossini

Although unknown to most people and professionals, ontologies

have their origin in ancient Greece, having been used by

philosophers such as Aristotle and Porphyry. Today, they

are present in various areas of human knowledge such as in

applications of artificial intelligence, knowledge management,

natural language processing and software engineering. So, what is an ontology and how is it instrumental in building the Semantic Web?

There are multiple definitions for ontology,

but one of the most popular is Tom Gruber’s:

“Ontology is a formal and explicit specification of

a shared conceptualization.” The World Wide

Web Consortium (W3C), in turn, conceptualizes

ontology as “the definition of the terms used

in the description and the representation

of an area of knowledge.” For example, an ontology on Internet of Things standards (the knowledge area) would describe those standards (objects), their attributes (terms) and the relationships found between them.

Ontologies are considered one of the highest

levels of expressiveness of knowledge.

They cover features present in vocabularies,

glossaries, taxonomies and frames. In addition,

they allow the expression of value restrictions (such as the fixed set of Brazilian states) and first-order logic constraints (an SSI is associated with one and only one individual).

In turn, the Semantic Web is defined by the W3C as the next major

goal of the Web, which allows computers to execute more useful

services through systems offering smarter relationships. In other

words, the Web will move from pages with content to pages with

meaning (semantics). Try searching for the word “Limão” (lemon, in English) and you will get results ranging from the definition of a citrus fruit to restaurant names and a district in the city of São Paulo. Only you, visually, can separate what interests you from what lies outside the context of your search.

Ontologies are the foundation of the Semantic Web: they give meaning to page content and relate pages to one another. Computers may then run queries through agents to find the desired set of information more rapidly and precisely, and to make inferences about it and its relationships.

In order to give meaning to traditional Web pages of static content (HTML), they must be accompanied by other technologies. The Resource Description Framework (RDF), RDF Schema (RDF-S) and the Simple Knowledge Organization System (SKOS) are languages used to describe the content of a page. Combined with ontology languages such as the Web Ontology Language (OWL), among others, they bring out structured knowledge, enabling the use of agents for searching and inference.
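A toy Python sketch, with hypothetical resources, shows the kind of triple-based representation and pattern query that RDF-style data enables; real applications would of course use RDF/OWL tooling rather than plain lists:

# Toy triple store in the spirit of RDF (subject, predicate, object);
# the resources and the query below are hypothetical examples.
triples = [
    ("Limao_district", "locatedIn", "Sao_Paulo"),
    ("Limao_fruit",    "isA",       "CitrusFruit"),
    ("Sao_Paulo",      "isA",       "City"),
]

def query(s=None, p=None, o=None):
    # Return the triples matching the pattern; None acts as a wildcard.
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What is located in Sao Paulo?" becomes a query over meaning, not keywords:
print(query(p="locatedIn", o="Sao_Paulo"))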

Despite the benefits of the Semantic Web, some obstacles to its full adoption remain. Paradoxically, there is still little semantic content, which makes evolution difficult. The integration of different languages adds coding effort so that the same content can be interpreted by ontologies written in other languages. And, above all, no ontology language is commonly accepted as the ideal one for the Semantic Web, nor are these languages fully standardized among themselves, which makes integration harder. Thus, an effort of standardization and adoption is still needed before we harvest the fruits that meaning and search automation will bring to the Semantic Web.

For further information: Semantic Web: the Internet of the future. Karin K. Breitman.

Semantic Web for the Working Ontologist: Effective Modeling in RDFS and OWL, D. Allemang and J. Hendler.

Six Challenges for the Semantic Web, Oscar Corcho et Al.

Page 98: Transformation and Change

98

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

MASS CUSTOMIZATION: OBTAINING A COMPETITIVE ADVANTAGE
Claudio Marcos Vigna

Companies are rethinking their ways of doing business, and in

order to achieve differential competitive advantages, many have

adopted the strategy of mass customization (MC).

The goal of MC is to offer unique products on an aggregate

production scale — comparable to that of mass production — at

relatively low costs. For its implementation, MC requires companies to have the agility and flexibility to meet different demands, in different quantities, at costs comparable to those of standard products and with a high quality standard.

There are some embryonic initiatives towards MC taking place

in Brazil. We can mention examples such as a home appliance

company that allows its clients to customize refrigerators and

stoves, as well as automobile companies that already allow some

components to be customized directly in the factories.

The ability to serve clients with customized products is the wish of many companies, since it cuts inventory costs and increases the satisfaction of customers who purchase a customized product. For this to work, however, companies must

overcome obstacles imposed by the adoption of MC.

Adoption of MC requires excellence in the functional areas

composing the operational value chain. According to the model

proposed by Claudio Vigna and Dario Miyake, this qualification

can be obtained with the development of functional skills

sustained by organizational, technical and operational resources

distributed in five critical areas. Such areas and their goals are

described below:

Product and process planning: development of customizable products that meet customers' needs without compromising the efficiency of operational processes. An example is the development of modular products, such as a vehicle platform shared between different models.

Supply chain logistics: improvement of the company’s relationship

with its suppliers in order to optimize processes. By adopting Electronic Data Interchange (EDI), it is possible to apply Vendor-Managed Inventory (VMI) techniques for continuous product replenishment.

Internal operations: increase of flexibility and productivity of internal

production and logistics operations, for example, with the adoption

of Flexible Manufacturing Systems (FMS) or robots able to perform

different activities according to the production program.

Distribution logistics: assertiveness and agility in logistics

operations from shipment to

delivery to the customer, for

example, adoption of cross-

docking techniques and use

of intelligent routing systems.

Marketing and sales: increase

interaction with clients through

the improvement of promotion

channels and order capture

operations, for example,

adoption of intelligent solutions

for e-commerce, engines to monitor social networking and data

mining.

The application of mass customization can be beneficial for companies, increasing revenue, profit and market share, but its adoption is not trivial. Despite the obstacles, executives have been devoting effort to its implementation.

For further information: http://www.teses.usp.br/teses/disponiveis/3/3136/tde-27072007-160311/pt-br.php

http://en.wikipedia.org/wiki/Mass_customization

http://mass-customization.de/

Page 99: Transformation and Change

99

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

SOFTWARE DEFINED NETWORK – THE FUTURE OF THE NETWORKS
Leônidas Vieira Lisboa and Sergio Varga

The recent evolution in IT delivery models, with cloud services,

analytical systems and virtual desktops, intensifies the demand

for an IT infrastructure that is simple, scalable and flexible.

To meet these demands, the industry has progressed in the

development of server and storage virtualization technologies,

bringing greater agility in the provision of resources in a data center.

However, this progress has not been matched by the networking industry. Changes at the network layer usually require complex interventions, with a low degree of automation, which increases the duration and risk of implementing new services. There is little flexibility to absorb traffic changes, compromising support for the dynamic environments the market requires.

This complexity lies in the fact that

each type of network equipment

is designed to perform specific

functions. In addition, the control and packet delivery functions are carried out by each device in a decentralized manner. The packet (or data) layer is responsible for forwarding, filtering, buffering and measuring packets, while the control layer is responsible for changes in network topology, routes and forwarding rules.

A new technology was developed to expedite the provision of

communication resources, facilitate management and operation

and to simplify the network infrastructure. It is based on three

pillars. The first one is separation of control (logical) and data

delivery (physical) layers in different equipment, which allows

centralized control. The second is virtualization, or physical network abstraction, which allows the best path to be designated for each traffic flow, regardless of the physical infrastructure. And the third is network programmability, which provides network configuration automation, i.e., external systems can automatically define the best network configuration for a given application.

This technology is called Software Defined Network (SDN) and promises to bring agility and flexibility to network expansions and changes. Network switches become simpler and less intelligent, because all control layer functions are executed by a centralized external element, the SDN controller, which is removed from the network equipment and becomes software running on an ordinary server.
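The following Python sketch illustrates the idea of centralized control; the rule format and class names are illustrative assumptions and do not represent the actual OpenFlow protocol:

# A central controller decides forwarding rules and pushes them to
# simple switches; the rule format below is purely illustrative.
class Switch:
    def __init__(self, name):
        self.name, self.flow_table = name, []

    def install_rule(self, match, action):
        self.flow_table.append({"match": match, "action": action})

class Controller:
    def __init__(self, switches):
        self.switches = switches

    def prioritize(self, traffic_class, out_port):
        # Centralized policy applied to every switch under management
        for sw in self.switches:
            sw.install_rule({"traffic_class": traffic_class},
                            {"forward": out_port, "queue": "high"})

edge = Switch("edge-1")
Controller([edge]).prioritize("voip", out_port=2)
print(edge.flow_table)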

To standardize and promote the use of SDN, an organization called the Open Networking Foundation (ONF) was created. This organization is led by business users, with the participation of equipment manufacturers, and encourages the adoption of SDN through the development of open standards. One of the results of this work is OpenFlow, a protocol that standardizes the communication between an SDN controller and the data layer of the network equipment.

Although OpenFlow is the protocol most associated with SDN, some manufacturers have begun to employ other protocols, such as BGP (Border Gateway Protocol) and XMPP (Extensible Messaging and Presence Protocol), to implement SDN use cases in networks that require greater scalability, since there are still discussions in the market about the maximum capacity of SDN projects based solely on the OpenFlow protocol. Another important initiative is OpenDaylight, led by networking industry manufacturers, which proposes the creation of a robust framework, based on Linux Foundation open source software, to build and support an SDN solution.

Today this technology is best suited to data center networking, but there are initiatives to use it in the provisioning of telecommunications network services. SDN usage is not yet widespread, but its development has been embraced by all major network manufacturers. It is also interesting to note that some cloud environments are already testing and incorporating SDN features to obtain productivity gains in network management and to address the efficiency challenges faced today.

For further information: https://www.opennetworking.org/

http://www.opendaylight.org/

https://www.opennetworking.org/sdn-resources/onf-specifications/openflow

Page 100: Transformation and Change

100

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

A PRIVILEGED VIEW OF THE EARTH
Kiran Mantripragada

Remote sensing is not a new subject. In fact, the Earth has been systematically photographed from aircraft since the beginning of World War I, in order to map, survey and monitor it. Today, the embedded systems in satellites are cheaper and miniaturized, and the evolution of sensors enables greater spatial and spectral resolution. This means that a single image pixel can capture objects smaller than one square meter on the Earth's surface, while analysis in different bands of the electromagnetic or acoustic spectrum makes it possible to differentiate features that could not be distinguished before, such as the identification of plant species.

Since the First World War, spatial resolution has gone from a few kilometers per pixel to less than one meter per pixel, while spectral resolution today enables the collection of images with more than 200 frequency bands. Just to give an idea, our everyday cameras photograph only the three bands of the visible spectrum, RGB (Red, Green, Blue). There are also satellites equipped with SONAR-type sensors, which measure the response to acoustic waves instead of electromagnetic ones.

Using a relatively simple principle, the

reflection of waves, it is possible to

photograph planet Earth in a systematic manner. A sensor fitted to a platform (a satellite, an airplane, or even a balloon) receives a different intensity value for each material that reflects its signal. For

example, a plant and the roof of a house reflect a particular

electromagnetic signal with different intensities.

Thanks to the popularization of this technology, studies in remote

sensing image processing are gaining a lot of prominence in the

universities. Some companies now also exploit this type of service

commercially, and other companies provide research on demand, usually with lower-altitude aircraft, to fulfill a specific purpose.

There are lots of applications: agricultural production, forest

monitoring, weather analysis and forecasting, military and civilian

supervision, urban planning, irregular occupation, analysis of

currents, plant and animal biodiversity analysis, measurement

and analysis of water bodies, oil and gas industry, forecasting

and monitoring of natural disasters, border control, urban growth,

transportation planning, public roads, highways, railroads, etc.

This technological breakthrough, which enabled so many applications, has also brought quite challenging problems. The pixels of a single image already represent a large amount of two-dimensional data in shades of gray. Combine this with images of billions of pixels in hundreds of spectral bands and we have raw material for the famous Big Data, with data on the order of 100 dimensions.

Some of these data sets are freely available on the Internet. For

example, the Earth Explorer Web page at NASA/USGS (United States Geological Survey) lets you download images from anywhere in the world, from the mid-1970s until today, or rather until the last time a Landsat satellite passed over the site of interest.

There are also data from several other satellites, as well as products resulting from image post-processing. For example, you can download a map of vegetation indices called NDVI (Normalized Difference Vegetation Index), a type of information widely used in agriculture and forestry mapping.
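As a small illustration, the NDVI can be computed directly from the red and near-infrared bands with the standard formula NDVI = (NIR - Red) / (NIR + Red); the reflectance values below are made up:

import numpy as np

# Hypothetical reflectance values for two bands of a tiny 2x2 image
red = np.array([[0.10, 0.30],
                [0.25, 0.05]])
nir = np.array([[0.60, 0.35],
                [0.30, 0.55]])

ndvi = (nir - red) / (nir + red + 1e-9)   # small term avoids division by zero
print(np.round(ndvi, 2))                  # values near +1 suggest dense vegetation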

Thus a new discipline emerges within cognitive computing, one that seeks to extract relevant information from this universe of sensor data. Machine learning algorithms must deal with an enormous number of pixels in order to interpret them and turn them into information that can be used by humans.

The technology behind remote sensing is a legacy that became available to the world after the wars and the arms and space races. It is up to each of us, whether scientist, businessperson, teacher, farmer, public manager or simply curious citizen, to make use of this fantastic set of data, much of it available at no cost, to help us observe, watch over, preserve and transform our planet.

For further information: USGS Earth Explorer:

http://earthexplorer.usgs.gov

What is Remote Sensing: http://oceanservice.noaa.gov/facts/remotesensing.html

Page 101: Transformation and Change

101

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

SMILE, YOU CAN BE IN THE CLOUDS
Antônio Gaspar

The world is increasingly instrumented,

interconnected and intelligent. This

quote opened the article on page 34 of the TLC-BR Mini Paper series. In this context, let's talk a bit about the evolution of CCTV (closed-circuit television) cameras for residential use. Video capture cameras have evolved from analog to digital models, and today they are compatible with Wi-Fi LAN technologies. The so-called IP cameras have become smart devices at an affordable cost for domestic use. In addition to the essential function of image capture, IP cameras include applications and a web interface offering additional features such as motion and ambient sound detection and the sending of alerts by e-mail, SMS and social networks. They also include night vision, multiple access profiles, operation schedules, etc.

Cameras usually have little or no storage capacity, allowing real-time viewing, but recording depends on external services. To store images it is necessary to have recording servers (DVR, Digital Video Recorder, devices) on the supervised premises. Thus, a citizen invests in a few IP cameras that allow 24x7 monitoring, saves the images on his or her computer or DVR and, cautiously, mitigates availability problems by installing uninterruptible power supplies for the cameras and recording devices. However, there is a condition beyond his or her control to be considered: what if the premises are broken into? What if the recording device is stolen and, with it, all the images? In fact, such events are common, and residential electronic security struggles with ways to store recorded images remotely.

Well, the time where every residence had a personal computer

where files were stored in isolation is now in the past. We are

in the era of wireless, smartphones and the clouds, everything

smarter and interconnected. With the migration of storage, servers and desktops to the cloud, security systems are following the same trend. According to Gartner's predictions for the sector, one out of ten companies will run its security resources in the cloud by 2015, and the residential segment should be no different. Having your images captured and stored by a security camera system located off-site, in the cloud, is possible and addresses concerns such as the ones mentioned above.

The advantages of residential CCTV recording in the cloud are numerous: independence from local servers (less equipment on the monitored premises), protection against image loss caused by equipment theft (including theft of the cameras themselves, since the images are recorded externally) and recording backup (you can download the recordings to mobile devices). In addition, cloud-based residential CCTV offers security and privacy (access-controlled portal and secure communication), makes wireless cameras feasible (less cabling, easier installation) and, finally, allows the images to be viewed on several types of devices, anytime and anywhere.

But how does one contract such a service? There are few options, and they are very focused on the corporate market, especially condominiums, companies and commercial establishments. Fortunately, this profile is changing and options for the residential market are emerging. In the local market prices are still high, but as the cloud is agnostic to geographic boundaries, overseas companies with attractive services and prices are an option, if the English language is not a constraint.

This kind of service is usually charged based on a combination of one or more variables, such as gigabytes stored, retention time, number of cameras, number of frames per second, etc., as in the rough estimate sketched below.
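A purely illustrative Python estimate of how such variables combine; the bitrate and the price per gigabyte below are hypothetical figures, not a quotation from any provider:

cameras = 4
bitrate_mbps = 1.0        # per camera; depends on resolution and frames per second
retention_days = 30
price_per_gb = 0.10       # hypothetical monthly price in USD

gb_stored = cameras * bitrate_mbps / 8 * 3600 * 24 * retention_days / 1024
print(f"~{gb_stored:.0f} GB retained, ~US$ {gb_stored * price_per_gb:.2f} per month")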

In addition, providers of this type of service, called VSaaS (Video Surveillance as a Service), offer a vast portfolio of features beyond simple image capture, such as recording indexing, notification of out-of-service cameras, automatic start of recording on motion detection, alerts with photos attached, etc. VSaaS providers also offer simple recording packages free of charge and without contract commitments. So the next time you see the sign “Smile! You're being recorded”, remember that your image may be much farther away than you imagine.

For further information: http://www.researchandmarkets.com/research/m9wgm4/video

http://en.wikipedia.org/wiki/VSaaS

Page 102: Transformation and Change

102

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

IBM MAINFRAME – 50 YEARS OF TECHNOLOGICAL LEADERSHIP AND TRANSFORMATION
Daniel Raisch

With the end of World War II, commercial computing gained a great boost, which led several technology companies in Europe and the United States to invest in this market. IBM, which started on this journey in the early 1950s, reached the 1960s with at least three families of large computers in production, establishing itself as one of the major suppliers in the market.

This apparent success did not hide the challenges IBM faced internally. The various computer lines had distinct technologies and architectures, separate production lines, independent management and even incompatibilities among models of the same family and their peripherals. Managing those lines made production more expensive and opened space for the competition every time a client needed an upgrade.

In this scenario, Thomas Watson Jr., IBM's Chairman of the Board, decided to launch the project of a new computer with total compatibility among its models, peripherals and applications, aiming to meet the computational needs of customers in various industries.

It was with this mission that the executive Bob Evans and his team of architects, Fred Brooks, Gene Amdahl and Gerrit Blaauw, designed the computer system named System/360 (S/360): a system for all purposes, hence the name S/360. With a budget of USD 5 billion and more than two years of work, the S/360 started the family of IBM mainframes, the high-end and most successful computers in the market, and it became the industry benchmark for commercial computing.

The official announcement was made by Thomas Watson Jr. on April 7, 1964, in the town of Poughkeepsie, NY, USA. The mainframe, today named System z, transformed the company. Seven new factories were opened to meet the demand, other computer lines were gradually discontinued, and the number of employees grew exponentially. The whole corporation was involved in the development of the new family of computers.

The industry was also transformed. Civil aviation progressed with the implementation of the SABRE reservation system, banks entered the online world and man stepped on the moon. The S/360 was present in all of this and was considered by the American writer Jim Collins one of the three products of greatest business impact, along with Ford's first cars and Boeing's jets. Thomas Watson Jr.'s dream machine worked well and IBM began to dominate the market. The corporation's income grew year after year, unlike its competitors' results.

In the mid-1970s, IBM became the largest computer company in the world, featuring among the world's top 10 companies according to Fortune magazine.

In Brazil, during the 1970s and 1980s, due to the government IT policy that restricted the import of computers, IBM mainframes gained very significant acceptance, with a factory set up in Sumaré, São Paulo. During this period IBM Brazil grew at an accelerated pace, expanded its customer base, increased sales and opened its own branches in the country's main capitals, leaving a legacy of stability for the hard 1990s.

Fifty years later, we realize that the strength of the original architecture, together with its technological leadership, has allowed the IBM mainframe to remain alive in the market and relevant to its clients and to the entire corporation to this day. Currently, Brazil occupies third place on the world stage of the mainframe market, which represents a significant part of IBM Brazil's revenue. No other technology product has remained in the market for so long, and no other IBM product has contributed so much to the corporation's success. We can confidently state that IBM made the mainframe and the mainframe made IBM.

For further information: http://ibmmainframe50anos.blogspot.com

Book: Father, Son & Co. - Thomas Watson Jr.

Book: Memories That Shaped an Industry - Emerson Pugh

Page 103: Transformation and Change

103

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

INTEROPERABILITY IN THE INTERNET OF THINGS
Paulo Cavoto

There is already a lot of the Internet of Things available on the Internet. The popularization of proximity technologies such as Near Field Communication (NFC) and Radio Frequency Identification (RFID) and the miniaturization of components are examples of this, and the increasing speed and reliability of communication networks drive the emergence of intelligent and interoperable equipment.

In the Mini Paper on page 80 (The Challenges of the Internet

of Things), Fábio Cossini describes the main barriers to the

consolidation and wide acceptance of this new technological era. Moreover, this transformation also brings a great disruption to the area of software development.

Many things are already used in real time. The architecture of the software that runs on these smart devices is focused on direct machine-to-machine (M2M) communication, but always within a well-defined scope of possibilities. For example, your wine cellar may be integrated with online stores, so the appliance could advise you to buy more wine or even make suggestions based on your consumption pattern. But this device will probably not talk to your television to show the available wine options, or to your stove to know which wine best harmonizes with what is being prepared, when the devices are from different brands.

The market has been dealing with heterogeneous, interconnected systems for quite some time. Systems with a greater capacity for extension and integration are among the most common requests manufacturers receive. This leads, most of the time, to the emergence of new intelligent products, even though these products often do not communicate with those of other manufacturers.

It is impossible to anticipate every possible interaction between devices, but connectivity with all devices must be promoted, including those that have not been invented yet. This should be a guideline for the Internet of Things to evolve and win more supporters. New application projects will not be able to predict every kind of interaction, but they must be based on an architecture that supports the publication and consumption of messages.

Each new product must provide simple communication channels and extensions, through public Application Programming Interfaces (APIs) or open protocols, so that an orchestrator of these connections becomes unnecessary. To allow a greater range of possibilities, each component should provide means of being configured with other devices using proximity technologies or even the Internet, similar to what we do with Bluetooth devices. The difference is that, once “paired”, you can choose when to trigger another event or what actions should be taken right after a particular event is triggered. In this way, the possibilities for communication between devices are magnified, leaving the choice of actions under the control of the user and enabling the creation of networks between devices of different brands.

Returning to our example, each component (the wine cellar, the oven and the television) would be configured and installed by the user once and would then generate and consume events. On the television we could browse the devices that are ready to communicate and, once a device such as the wine cellar is found, configure what should be done when a certain event is triggered by it. The same could be done between the oven and the wine cellar, as sketched below.
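A minimal Python sketch of this publish-and-subscribe pairing follows; the device names come from the example above, while the event bus API itself is an illustrative assumption:

class EventBus:
    def __init__(self):
        self.subscribers = {}              # event name -> list of callbacks

    def subscribe(self, event, callback):
        self.subscribers.setdefault(event, []).append(callback)

    def publish(self, event, **data):
        for callback in self.subscribers.get(event, []):
            callback(**data)

bus = EventBus()

# The television reacts to an event published by the wine cellar,
# even though the two devices could come from different manufacturers.
bus.subscribe("wine_stock_low",
              lambda label: print(f"TV: suggest ordering more {label}"))

bus.publish("wine_stock_low", label="Malbec")  # published by the wine cellar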

It is up to us, technology professionals, to architect our products

in an open and customizable way. Thus, the applications and

the possibilities that the Internet of Things will provide us will be

limited only by our imagination.

For further information: Mini Paper Series 184 (p. 80): The challenges of the Internet of Things

http://www-03.ibm.com/press/us/en/pressrelease/43524.wss

Page 104: Transformation and Change

104

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

AGILE PROJECT MANAGEMENT OR PMBOK®?
Felipe dos Santos

Since the publication of the Agile Manifesto, in

early 2001, the software development technical

community has been discussing, comparing

and evaluating prescriptive (those that generate

more artifacts, offering a number of controls) and

adaptive (those that shape themselves during

iterations) project management methods. Based

on these definitions, we can consider the Project

Management Body of Knowledge (PMBOK®) guide,

published by the Project Management Institute

(PMI) as a prescriptive method and Scrum as

an adaptive method. Scrum is the world’s most

utilized agile methodology (6th Annual “State of

Agile Development” Survey, 2011) and, therefore,

will be used in this article.

A comparison between these approaches presents a great paradox, since PMBOK® planning is meant to avoid change, while in agile methods changes are welcome. For the PMI community, agile methods seemed poorly documented and organized and highly susceptible to failure, due to the minimal amount of controls, while for the agile community the existing methodologies were bureaucratic and added no value. Over time the two communities understood there was room for both. The agile methods arose as a new tool for the project manager, giving more flexibility in projects of an adaptive nature. PMI recognized this change a little more than two years ago and launched the PMI Agile Certified Practitioner (ACP) certification, which certifies professionals with knowledge of agile principles.

The PMBOK® advises that planning should be complete and

comprehensive and that the plan should be followed until the

final delivery of the project. This approach is appropriate in many

cases, since the PMBOK® knowledge areas assist the project

manager in achieving success, at least from the point of view of

the “iron triangle” (time, cost and scope). The agile methods, such

as Scrum, defend the idea that empirical estimates are subject

to errors and that a lot of time should not be spent planning all

the details, since many changes may occur during the project.

There are projects of an iterative and incremental nature, in which it is not expected to have all the answers at the beginning. A soap opera is a good example: it can take many directions depending on public acceptance and, in some cases, can even be terminated for not meeting the TV channel's objectives. Scrum would make more sense in this kind of project. For a project to build a soccer stadium, on the other hand, PMBOK® would be the most suitable, due to the strict planning it requires, including risk management. In this type of

that includes risk management. In this type of

project, everything must be planned in detail in

the beginning so that the client knows exactly how

much will be spent and how soon the work will

be completed.

In software development projects, a study done by the Standish Group (Chaos Report 2002) showed that 64% of system functionality was rarely used; the remaining part corresponds to what really mattered to end users. Scrum guides you to prioritize what generates the greatest customer value. In certain projects this means that we can deploy something unfinished to production, as long as it already offers a benefit to the business. We must take into consideration that the customer often does not know exactly what he or she wants at the beginning of a project. During the iterations, the client realizes that some items do not make sense or that a new requirement is needed, whether due to a business demand to leverage a new opportunity, a change in legislation or many other reasons.

From the Scrum perspective, for example, items can be replaced, removed or included without this counting as a project failure, because the focus is more on customer satisfaction than on schedules and exhaustive planning. There are also challenges, for example, transforming a traditional client (who requires defined time and cost) into an agile customer. A growing acceptance of agile methods is being observed: some clients are starting to accept that, in certain cases, synergy between project participants and quick response to changes are worth more than a climate of conflict among the parties over deadlines, costs and scope, disputes that cause problems in the client-supplier relationship.

For further information: Mini Paper: "Agile: are you ready?" (pg. 53) Series year 7 May, 2012 – n. 157

http://www.agilemanifesto.org

http://brasil.pmi.org/

Page 105: Transformation and Change

105

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

BLOOD, SWEAT AND WEB: HOW THE WORLD WIDE WEB WAS CREATED
Marcelo Savio

The emergence of the World Wide Web 25 years ago is commonly referenced as the founding landmark of a new era, in which the Internet expanded beyond the walls of universities and research centers. Despite its enormous importance and near ubiquity in today's world, the Web has a highly contingent history, with precariousness, tensions and forks common to many other technological artifacts. In fact, it only moved forward thanks to inspiration and, more importantly, perspiration from its selfless builders, Tim Berners-Lee and Robert Cailliau, both from CERN, an international physics laboratory located in Geneva, Switzerland.

The British physicist Tim Berners-Lee worked there as a software developer when he conceived a system to obtain information about the connections among all the people, teams, equipment and projects under way at CERN. In March 1989 he wrote a proposal to the board requesting resources for the construction of such a system. He did not get any reply.

That's when Robert Cailliau, a Belgian computer engineer whom Tim had met during his first stint at CERN, came in; Tim explained his ideas and hardships to him. Robert, a technology enthusiast and veteran of the laboratory, became a key ally, since he had an extensive contact network and a providential capacity for persuasion. He rewrote the proposal in more attractive terms and obtained not only the approval of the same board but also extra money, new machines, students to help and rooms to work in. Tim was then able to code the first versions of the main elements of the Web: the HyperText Markup Language (HTML), the HyperText Transfer Protocol (HTTP), the Web server and the client (browser).

In 1991, Robert and Tim got approval to demonstrate the first version of the Web at Hypertext '91, a major international conference on hypertext in the USA. In fact, they had submitted a paper that was rejected "due to lack of scientific merit", but with their usual persistence they managed to convince the event's organizers to let them perform a live demo. They went enthusiastically to the USA, little knowing that the difficulties were just beginning. When they arrived at the venue, they discovered that they had no way to connect to the Internet. Robert managed to convince the hotel manager to extend a pair of telephone wires and then soldered them to an external modem they had brought, because there was no compatible connector. To get an Internet connection, Robert called the nearest university and found someone who allowed them to use a dial-up service, from which it was possible to connect to the remote Web server that had been prepared at CERN. The demonstration was a success. After the conference, every project seemed to have something to do with the Web, and from then on it started to win over the world.

With the announcement, numerous improvements and suggestions came in. It was time to go to the IETF, the forum responsible for Internet standards and technical specifications, but only in 1994, after two years of endless discussions, did they finally manage to approve the first Web specification. In the same year they organized the first WWW Conference at CERN, to focus on the future, and announced that the Web code would be placed in the public domain and that a specific standardization consortium, the W3C, would be created to deal with Web issues.

The nascent technology was thus prepared to gain the relevance it was entitled to in the history of the Internet and of humanity. The creation of the Web, by combining hypertext with computer networks, showed us that it is possible to create a tremendous innovation from widely available, consolidated technologies. Furthermore, the journey to achieve an innovation is always difficult and demands not only technical competence but also a lot of determination from its creators. Think about that when you open the next Web page in your browser.

For further information: http://www.webat25.org

http://www.w3.org/People/Berners-Lee/Weaving/Overview.html


DIRECT MEMORY ACCESS: VULNERABILITY BY DESIGN?
Felipe Cipriano

FireWire is a high-speed serial interface created by Apple as a replacement for SCSI and, in a way, a competitor to USB.

One of the advantages of FireWire is the possibility of obtaining

direct memory access, without operating system intervention. This

allows for faster transfers and decreases the latency between

the device and the computer.

It is no coincidence that FireWire is widely used in audiovisual editing. In scenarios that require the least possible delay (for example, real-time editing), any operating system interference would be quite noticeable.

But direct memory access has its disadvantages: since FireWire is a hot-swappable interface, a device can be connected to an already running computer and gain privileged access to system memory, which most likely contains confidential information. Thus it is possible to obtain a memory dump, a copy of the entire memory contents, just by plugging into the FireWire port of a computer, even one locked with a password.

One of the most common attacks exploiting FireWire's direct memory access is to obtain such a memory dump and then analyze it in search of sensitive information.
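To make the idea concrete, the sketch below is a minimal, hypothetical Python example of the analysis step only: sweeping a raw dump file for plain-text markers. The file name and the markers are illustrative assumptions, not part of any specific attack tool.

import re
import sys

# Hypothetical markers that might precede plain-text credentials in a dump.
MARKERS = [b"password=", b"passwd=", b"Authorization: Basic "]

def scan_dump(path, context=32):
    """Print every marker occurrence with a few bytes of trailing context."""
    with open(path, "rb") as f:
        data = f.read()
    for marker in MARKERS:
        for match in re.finditer(re.escape(marker), data):
            start = match.start()
            snippet = data[start:start + len(marker) + context]
            print(f"offset {start:#010x}: {snippet!r}")

if __name__ == "__main__":
    # Usage (assumed file name): python scan_dump.py memory.dump
    scan_dump(sys.argv[1] if len(sys.argv) > 1 else "memory.dump")

The Inception project linked at the end of this paper is an example of a real tool in this space.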

In some operating systems the current user's passwords are not encrypted and sit in memory as plain text. Even when the system protects its passwords, it is quite possible to obtain data from recently opened documents or to exploit flaws in third-party programs.

But the attack I find most interesting is the direct manipulation of code already loaded into memory in order to circumvent system security, just as GameShark did with video games. This kind of attack modifies the authentication libraries loaded into memory so that they accept any password. It is very discreet, since it does not change system files, and hardly anyone would find it strange that their (legitimate) password is accepted by the system. It is also effective even on machines with disk encryption, since the keys are already loaded into memory to run the operating system.

In addition to the scenario of a machine being exploited during the user's absence, this technique can be used to gain access to suspended machines, since their memory remains powered. When the machine returns from the suspended state, passwords, such as those of the BIOS or of disk encryption programs, are not required to reactivate the system. The attack is quite fast, because the memory addresses typically used for authentication on each system are already known.

To mitigate this, most recent operating systems implement Address Space Layout Randomization (ASLR), a technique that places code at different memory addresses each time a program is started. But this protection only slows the attack down: on such systems it becomes necessary to obtain a complete memory dump and search it for the addresses where the authentication code is stored.

One of the most common countermeasures is to block the FireWire driver, which is enough to prevent Direct Memory Access (DMA) attacks. Another is to block the use of the FireWire ports altogether, either by removing the drivers or by physically isolating the ports. On Mac OS X and Linux it is possible to disable only DMA. In the case of OS X, when disk encryption is in use the system automatically blocks DMA over FireWire while the screen is locked with a password.

And even though the FireWire interface is heading into retirement, this attack remains quite possible on any hot-plug interface that offers direct memory access, such as Thunderbolt, which is seen as FireWire's replacement.

For further information: https://www.os3.nl/_media/2011-2012/courses/rp1/p14_report.pdf

http://www.breaknenter.org/projects/inception/


BIG DATA AND THE NEXUS OF FORCES
Alexandre Sales Lima

In recent years we have observed a significant change in the IT market, driven mainly by the confluence of the following forces: cloud, social media, mobility and information (Big Data). The last of these is at the epicenter of the change, because its development is intrinsically related to the growth and confluence of the other forces. Given this scenario, one question stands out: how can we surf the wave of Big Data opportunities at the nexus of these forces?

If we look at each one of them we can observe the following

scenario:

Cloud: the increasing adoption of cloud computing solutions is bringing more agility, scalability, capacity and dynamism to the corporate world, enabling new and better services to be offered.

Social media: extremely widespread in the interpersonal context, it carries a very diverse set of information types (text, video, relationships and preferences). It has consolidated its position as a powerful communication channel from both the social and the corporate viewpoint, giving an active voice to citizens and consumers.

Mobility: with global growth in the double digits, smartphone use has changed the way society behaves and relates. The adoption of 3G and 4G technologies introduces a level of ubiquity and reach in the collection of information never before seen in history. For example, more than 60% of Twitter users access the application through mobile devices, and that does not even count connected devices that emit signals and data continuously.

Information: in addition to companies' natural growth in systemic data, today we also see a large increase in human collaboration data, such as e-mails, web pages, documents, instant messaging conversations and social media content.

If we look at the dynamics among these forces, we can see that each reinforces the others in a spiral of growing capabilities and data volume. Gartner calls this confluence the "Nexus of Forces", IDC calls it "The Third Platform", and The Open Group refers to it as "Open Platform 3.0". Regardless of the name, Big Data is at the heart of this change in the business scenario, either as a catalyst or as a by-product of the business process. But how can we take advantage of it?

The first step is to understand what Big Data is. Forrester defines Big Data as a set of techniques and technologies that make handling data at extreme scale affordable. The second step is to understand what we can do with it. Big Data lets us analyze more information, faster and more deeply, helping us understand the world in ways that were unthinkable not long ago. It also enables us to find value and business opportunities where none had previously been conceived; for example, it allows a large corporation to have individualized, personalized interactions at scale.

The third step, however, is the most important: how do we do this? First of all, we need to understand that Big Data by itself is not important; if we do not put the data into a meaningful context, we will not be able to surf this tsunami of data. Finally, we have the technology that enables this vision: Hadoop, a distributed system for storing and retrieving information; distributed computing to process data streams at high speed; and advanced analytics to identify patterns and trends in this sea of information.
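As a rough illustration of the programming model that Hadoop popularized, the Python sketch below simulates the map and reduce phases of a word count over two in-memory "documents". It is a single-process toy, not Hadoop's actual API; the sample data and function names are assumptions made for the example. On a real cluster, the same two phases would be distributed across many nodes.

from collections import defaultdict

# Illustrative sample "documents"; on Hadoop these would live in HDFS.
documents = [
    "big data at the nexus of forces",
    "cloud social mobility and big data",
]

def map_phase(doc):
    # Map step: emit a (word, 1) pair for every word in the document.
    for word in doc.split():
        yield word, 1

def reduce_phase(pairs):
    # Reduce step: group the pairs by key and sum the values.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

if __name__ == "__main__":
    all_pairs = (pair for doc in documents for pair in map_phase(doc))
    print(reduce_phase(all_pairs))

The value of the model is that the map and reduce functions contain no distribution logic at all; the framework takes care of partitioning the data, scheduling the work and moving intermediate results.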

The convergence of these forces is not only changing the IT landscape but also promoting a change in current business processes. The combination of these components enables companies that know how to use them to extract value from data and gain competitive advantage. To stay competitive in this new market, not only companies but also IT professionals must master these new concepts.

For further information: http://www.ibmbigdatahub.com/

http://www.gartner.com/technology/research/nexus-of-forces/

https://ibm.biz/BdDwS7


DEMYSTIFYING VIRTUAL CAPACITY, PART I
Jorge L. Navarro

In a virtualized system, virtual machines share the underlying

physical machines (PM). Physical machines have a well-defined

processing capacity, but how much of it actually goes to a specific

virtual machine (VM)?

Let us denote by virtual capacity the capacity that a VM actually gets to use. What parameters determine this virtual capacity? Hypervisors and virtualization technologies may use different names but, in all cases, the concepts behind them are the same.

Physical Machine Capacity. The virtualization layer distributes the PM's capacity among its associated VMs. The resources available in a PM define the upper limit on virtual capacity: a hosted VM cannot be larger than its host PM, and the extra processing required by virtualization itself (overhead) must also be taken into account. Capacity is typically measured in processor cores or in aggregate CPU cycles (MHz).

Guaranteed Capacity. This is the capacity that is ensured to be made available to a VM whenever its demand requires it. For example, consider a guaranteed capacity of 4 cores. If the load requires 2 cores, the VM will use 2 cores. But should demand increase to 5 cores, the VM is guaranteed 4 cores, while the additional one may or may not be available depending on other factors. This is also known as nominal or reserved capacity, and it is measured in capacity units.

Exclusive use attribute. A flag that indicates whether the guaranteed capacity is reserved for the exclusive use of a given VM. If it is not, any unused guaranteed capacity is made available to the other VMs that share the PM. This is also referred to as dedicated use.

Limit / cutoff attribute. A flag that indicates whether or not the guaranteed capacity may be exceeded; when it is not set (uncapped), the VM's virtual capacity can go beyond the guaranteed capacity if necessary. Some hypervisors specify a capacity limit that is not tied to the guaranteed capacity.

Virtual cores. A fundamental, though sometimes complex, concept: virtual cores are the link between the physical and the virtual worlds. The operating system inside a VM sees virtual cores and dispatches the execution of processes to them; the hypervisor then allocates physical cores to the virtual ones. The number of virtual cores can limit virtual capacity, i.e. a VM with 2 virtual cores can never have a virtual capacity greater than 2 physical cores.

Relative priority. This parameter specifies relative priorities among

VMs that compete for capacity. Such a competition may happen

when aggregate demand is greater than the PM capacity. Common

names for this concept are uncapped weight or shares.

Virtual capacity, in fact, depends on all the above factors. Let

us consider a simple scenario: 2 VMs, red and blue, sharing 8

cores from a PM.

Both red and blue VMs are defined in the same way: guaranteed

capacity equal to 4 cores, no exclusive use, uncapped / unlimited,

8 virtual cores and relative priority of 128.

What happens when the red VM's users generate a demand for 5 cores at the same time as the blue VM's users generate a demand for 5 cores? According to the parameterization above, it is possible for the red VM to use 5 physical cores, because it is uncapped/unlimited and has at least 5 virtual cores. But to go beyond its guarantee of 4 cores there must be free physical capacity available, and this is not the case, because the other VM, the blue one, is using its 4 cores of guaranteed capacity.

So, the final capacity distribution under the conditions above is

that each VM is using 4 cores and, consequently, the PM is 100%

occupied (i.e., all 8 cores are allocated). Hence, from a sizing

perspective, the PM was undersized, failing to meet all demands.
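The arithmetic of this scenario can be captured in a few lines of code. The Python sketch below is only an illustration of the rule described above (each uncapped VM first receives the minimum of its demand and its guarantee, and any remaining demand is served from whatever PM capacity is left over); it is not any hypervisor's real scheduler, and it ignores priority-based sharing of the free pool, which only matters when free capacity must be split and is covered in Part II.

# Minimal sketch of the allocation rule for uncapped VMs without
# exclusive use (an illustration, not a real hypervisor scheduler).
def allocate_uncapped(pm_cores, vms):
    # vms: dict name -> {"guaranteed": cores, "demand": cores}
    # Step 1: every VM gets min(demand, guaranteed capacity).
    alloc = {n: min(v["demand"], v["guaranteed"]) for n, v in vms.items()}
    free = pm_cores - sum(alloc.values())
    # Step 2: remaining demand is served from leftover PM capacity, if any.
    for name, v in vms.items():
        extra = min(v["demand"] - alloc[name], free)
        alloc[name] += extra
        free -= extra
    return alloc

# The scenario above: 8-core PM, 4 cores guaranteed each, demand of 5 each.
print(allocate_uncapped(8, {
    "red":  {"guaranteed": 4, "demand": 5},
    "blue": {"guaranteed": 4, "demand": 5},
}))  # {'red': 4, 'blue': 4} -> the PM is fully allocated, demand is unmet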

What would happen if the demand from the blue VM decreased

from 5 to 1 core? In the second part of this article we will discuss

more complex and subtle cases.

For further information: https://www-304.ibm.com/connections/blogs/performance/entry/demystifying_virtual_capacity_part_i?lang=en_us

https://www-304.ibm.com/connections/blogs/performance/?lang=en_us

http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.resmgmt.doc%2FGUID-98BD5A8A-260A-494F-BAAE-74781F5C4B87.html

http://www-03.ibm.com/systems/power/software/virtualization/resources.html


DEMYSTIFYING VIRTUAL CAPACITY, PART II
Jorge L. Navarro

In the previous Mini Paper we defined the concept of virtual capacity and identified the generic factors it depends on: physical machine (PM) capacity, guaranteed capacity, the exclusive use attribute, the limit/cutoff attribute and relative priority.

A very simple scenario was proposed: a PM with 8 cores and two VMs, red and blue, parameterized with the same settings: guaranteed capacity of 4 cores, no exclusive use, no limit/cutoff (uncapped), 8 virtual cores and relative priority of 128.

If the red demand is 5 cores and blue is 1 core, what will the

capacity distribution be?

The blue VM will use only 1 core, since its demand is well below its guaranteed 4 cores. The 3 remaining cores are not used; because the exclusive use attribute is not activated, they are returned to the pool as free capacity.

The red VM's usage increases to 5 cores: 4 from its guaranteed capacity and 1 additional core taken from the free capacity. In this situation the PM is 75% occupied (6 of 8 cores) and there is no unmet demand.

A general rule: if all VMs are uncapped and without exclusive use, and the sum of all demands is less than the PM capacity, then every VM's demand can be met.

Let us now consider a contention case, i.e. the PM capacity is not enough to satisfy the sum of the VM demands, while all VMs remain uncapped and without exclusive use. How is the now scarce PM capacity distributed?

Suppose the same 8-core PM, now with 3 VMs (red, blue and green) and the following settings: guaranteed capacity of 3 cores for red and blue and 2 cores for green, no exclusive use, no limit/cutoff, 8 virtual cores and relative priority of 128.

The demands are: green requests 1 core, while red and blue request 4 cores each. The demands add up to 9 cores, more than the physical capacity of 8 cores.

To distribute the capacity, the VMs fall into two groups. In the first group are the VMs whose demand is less than or equal to their guaranteed capacity: their demand is met and the remainder of their guaranteed capacity is ceded, increasing the free capacity. The green VM is in this group: it uses 1 core and cedes 1 core.

In the second group are the VMs whose demand is greater than their guaranteed capacity: they receive their guaranteed capacity plus a share of the free capacity, in proportion to their relative priorities. The red and blue VMs are in this group: each uses its 3 guaranteed cores plus half of the single free core, which is divided into two equal parts because both VMs have the same priority and therefore receive the same fraction. In this situation the PM is 100% used and there are unmet demands, from both red and blue.
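This two-group rule is easy to express in code. The Python sketch below is only an illustration under the article's simplifying assumptions (all VMs uncapped, no exclusive use, enough virtual cores); it is not any hypervisor's real algorithm, and for brevity it distributes the free capacity in a single pass, whereas a complete solution would redistribute any share a VM cannot absorb.

# Illustrative sketch of the two-group rule described above.
def allocate_with_priority(pm_cores, vms):
    # vms: dict name -> {"guaranteed": g, "demand": d, "priority": p}
    alloc = {}
    free = pm_cores
    hungry = []  # VMs whose demand exceeds their guarantee

    # Group 1: demand <= guarantee is fully met; unused guarantee is ceded.
    for name, v in vms.items():
        alloc[name] = min(v["demand"], v["guaranteed"])
        free -= alloc[name]
        if v["demand"] > v["guaranteed"]:
            hungry.append(name)

    # Group 2: split the free capacity in proportion to relative priority,
    # never giving a VM more than it actually demands.
    total_priority = sum(vms[n]["priority"] for n in hungry)
    for name in hungry:
        if total_priority and free > 0:
            share = free * vms[name]["priority"] / total_priority
            alloc[name] += min(share, vms[name]["demand"] - alloc[name])
    return alloc

# The red/blue/green scenario above (8-core PM):
print(allocate_with_priority(8, {
    "red":   {"guaranteed": 3, "demand": 4, "priority": 128},
    "blue":  {"guaranteed": 3, "demand": 4, "priority": 128},
    "green": {"guaranteed": 2, "demand": 1, "priority": 128},
}))  # red 3.5, blue 3.5, green 1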

What would happen if the green VM were switched off or its demand dropped to zero? Or if the blue VM had a limit/cutoff? Or if the red VM were changed to 2? Or if the green demand went up to 4 cores? Or if the blue VM's guaranteed capacity were for exclusive use? And if... and if... and if...

The calculations needed to solve a generic case are simple, if you know what to do and how to do it. I have created a spreadsheet that implements these calculations, to be used as a helper tool: the virtual capacity demystifier. It comes with a presentation that illustrates its use. Experiment with the tool to fully understand virtual capacity.

One last point: perhaps everything above deserves the caption "... in a perfect world". In the real world there are second-order effects (overhead, inefficiencies, cache misses) that reduce the virtual capacity we actually get. These effects belong to the universe of advanced technicians and gurus, but you should be aware that they exist.

For further information: https://www-304.ibm.com/connections/blogs/performance/entry/demystifying_virtual_capacity_2nd_part_and_tool?lang=en_us

https://www-304.ibm.com/connections/blogs/performance/?lang=en_us

http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.resmgmt.doc%2FGUID-98BD5A8A-260A-494F-BAAE-74781F5C4B87.html

http://www-03.ibm.com/systems/power/software/virtualization/resources.html


CLOSING REMARKS AND ACKNOWLEDGMENTS
Agostinho de Arruda Villela, TLC-BR Chair

As mentioned in the preface, we are living in a time of intense transformations: political, economic and technological. Transformations, in turn, impose the need for re-education and new learning.

Contemplating the 100 or so mini papers that make up this book, one gets a good idea of the topics that will be dominant, and that should therefore be on the agenda of anyone who wants to prepare for these changes. It is worth saying that, even though each mini paper intentionally does not delve deeply into its theme, it allows the reader to acquire an initial knowledge of the subject and points to references for further information.

Compiling these documents, however, involved considerable effort: from the effort spent by each author to develop a subject in a concise yet interesting form, to that of the reviewers, the translators (the mini papers were written mostly in Portuguese and then translated into English), the translation reviewers, and Argemiro Lima and Maria Carolina Azevedo, who coordinated the entire process, including the administrative work of obtaining financial resources and procuring suppliers. All of this was done as volunteer work, over and above each contributor's day job.

Such mobilization relates to the practices that derive from our company values, aimed precisely at enabling IBM to carry out the transformations expected of it. For example, "unite to get it done" (clearly demonstrated by the mobilization involved in this work) and "show personal interest" (without which we would still have an unfinished book).

So I would like to place on record, on behalf of the TLC-BR (Technology Leadership Council), my most sincere appreciation, gratitude and admiration for all those who turned the intention of this second book into reality, published in two languages, in digital and printed form: the authors, IBMers and ex-IBMers; the reviewers from the Editorial Committee; the translators from IBM Brazil's technical community; the translation reviewers from the IBM Academy of Technology; and the leaders of the Editorial Committee.

Finally, it is worth noting that the production of mini papers (fortunately) does not stop: it has been going on for nine years now, on a biweekly basis.

Adelson Lovatto

Adrian Hodges

Adrian Ray

Agostinho Villela

Alberto Eduardo Dias

Alberto Fernando Ramos Dias

Alex da Silva Malaquias

Alexandre Sales Lima

Alexis da Rocha Silva

Anderson Pedrassa

André Luiz Coelho da Silva

André Viana de Carvalho

Argemiro José de Lima

Argus Cavalcante

Ashish Mungi

Atlas de Carvalho Monteiro

Bianca Zadrozny

Boris Vitório Perez

Brendan Murray

Bruno da Costa Flach

Carlos Fachim

Carlos Henrique Cardonha

Carolina de Souza Joaquim

Caroline Pegado de Oliveira

Cesar Augusto Bento do Nascimento

Christian Prediger Appel

Claudio Keiji Iwata

Cleide Maria de Mello

Colleen Haffey

Daniela Kern Mainieri Trevisan

David Losnach

David R. Blea

Debbie A. Joy

Denis Vasconcelos

Denise Christiane Correia Gonçalves

Denise Luciene Veroneze

Diane Ross

Eduardo Furtado de Souza Oliveira


Page 111: Transformation and Change

111

TECHNOLOGY LEADERSHIP COUNCIL BRAZIL

Fabio Cossini

Felipe Grandolpho

Fernando Ewald

Fernando Padia Junior

Fernando Parreira

Flávia Aleixo Gomes da Silva

Flavia Cossenza Belo

Flavia Faez Muniz de Farias

Gabriel Pereira Borges

Gerson Itiro Hidaka

Gerson Makino

Gerson Mizuta Weiss

Glauco Marolla

Guilherme Correia Santos

Guilherme Galoppini Felix

Hema S Shah

Jeferson Moia

João Claúdio Salomão Borges

João Francisco Veiga Kiffer

João Marcos Leite

João N Oliveira

John Easton

John Fairhurst

José Alcino Brás

Juliana Costa de Carvalho

Katia Lucia da Silva

Kelsen Rodrigues

Leonardo Garcia Bruschi

Liane Schiavon

Louise de Sousa Rodrigues

Luiz Gustavo Nascimento

Marcel Benayon

Marcelo França

Marco Aurélio Cavalcante Ribeiro

Marco Aurélio Stelmar Netto

Marcos Antonio dos Santos Filho

Marcos Sylos

Marcos Vinícius Gialdi

Marcus Vinícios Brito Monteiro

Maria Carolina Feliciano de Oliveira e Azevedo

Miguel Vieira Ferreira

Nicole Sultanum

Odilon Goulart

Paolo Korikawa

Patrick R Varekamp

Patti Foley

Paulo Emanuel Critchi de Freitas

Paulo Huggler

Priscilla Campos Kuroda de Carvalho

Rafael Cassolato de Meneses

Reinaldo Tetsuo Katahira

Renan Camargo Pacheco

Rosane Goldstein G. Langnor

Rosely Oga Miyazaki

Ruth Gibrail Tannus

Sandipan Sengupta

Sandra Mara Gardim Rocha

Sandra Woodward

Sara Elo Dean

Sergio Varga

Shephil Philip

Shweta Gupta

Steve Heise

Tarik Maluf

Tatiana Brambila Corghi

Teresa Raquel Souza do Nascimento

Thiago Guimarães Moraes

Thiago Signorelli Luccas

Thomas Mailleux Sant’Ana

Tiago Moreira Candelária Bastos

Vandana Pandey

Vitor Hugo Lazari Pavanelli

Washington Cabral

Wellington Chaves
