
Department of Computer Science & Engineering

Academic Year: 2017-18, Volume 3, Issue 1

GMR Institute of Technology, Rajam – 532 127

CONTENTS

Editorial Board

Department Vision & Mission

Department Programme Educational Objectives

Department Programme Outcomes

Technical Corner

o Rise of Cryptocurrency

o Tesla and self-driving tech

o The disappearing phone

o Machine learning steps in it

o Augmented Reality

o Smart Speakers

o Researchers Crack the Code of 'Flying Doughnuts'

o Hot Solar Cells

o Paying with Your Face

o Here’s what AlphaGo’s historic win means for the enterprise

o Hands-free help from the Google Assistant.

Students Highlights

Faculty Highlights

CSE in News


GMR Institute of Technology, Rajam – 532 127, A.P.

(An Autonomous Institute, Approved by AICTE, Accredited by NBA, NAAC with ‘A’ Grade & Affiliated to JNTU-K)

An ISO 9001:2008 certified organization

Editorial Board

Principal

Dr. C.L.V.R.S.V. Prasad

Vice Principal

Dr. J. Raja Murugadoss

Head of the Dept

Dr. A.V. Ramana

Faculty Co-ordinator

Mr. D Siva Krishna, Assistant Professor

Committee Members

Dr. V. Sreerama Murthy

Dr. R. PriyaVaijayanthi

Dr. S. S. Gantayat

Dr. V Prasad

Dear Readers

Sub: “Techtronic Magazine”- December 2017.

We are glad to inform you that the Department of CSE, GMR Institute of Technology, Rajam, Andhra Pradesh, has released the new issue of its department-level technical magazine, TECHTRONIC, in the month of December 2017 (Volume 3, Issue 1).

This magazine series gives students and faculty members scope to motivate themselves by learning about the excellence of others, and brings them closer to the current state of the world and its challenges.

With warm regards,

Yours Sincerely,

Head of the Dept

Dr. A.V. Ramana


Vision

To be a nationally preferred department of learning for students and teachers alike, with dual commitment to research and serving students in an atmosphere of innovation and critical thinking.

Mission

To provide high-quality education in Computer Science and Engineering to prepare the graduates for a rewarding career in Computer Science and Engineering and related industries, in tune with the evolving needs of the industry.

To prepare the students to become thinking professionals and good citizens who would apply their knowledge critically and innovatively to solve professional and social problems.



Program Educational Objectives (PEOs)

I. Acquire logical and analytical skills with a solid foundation in core areas of Computer Science & Information Technology

II. Accomplish with advanced training in focused areas to solve complex real-world engineering problems and pursue advanced study or research

III. Demonstrate professional and ethical attitude, soft skills, team spirit, and leadership skills, and execute assignments to perfection


Program Outcomes

An engineering graduate will be able to:

PO 1: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems. (Engineering knowledge)

PO 2: Identify, formulate, review research literature, and analyze complex engineering problems, reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences. (Problem analysis)

PO 3: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety, and cultural, societal, and environmental considerations. (Design/development of solutions)

PO 4: Use research-based knowledge and research methods, including design of experiments, analysis and interpretation of data, and synthesis of information, to provide valid conclusions. (Conduct investigations of complex problems)

PO 5: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools, including prediction and modeling, to complex engineering activities, with an understanding of the limitations. (Modern tool usage)

PO 6: Apply reasoning informed by contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to professional engineering practice. (The engineer and society)

PO 7: Understand the impact of professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development. (Environment and sustainability)

PO 8: Apply ethical principles and commit to professional ethics and responsibilities and norms of engineering practice. (Ethics)

PO 9: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings. (Individual and team work)

PO 10: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions. (Communication)

PO 11: Demonstrate knowledge and understanding of engineering and management principles and apply these to one’s own work, as a member and leader in a team, to manage projects and in multidisciplinary environments. (Project management and finance)

PO 12: Recognize the need for, and have the preparation and ability to engage in, independent and lifelong learning in the broadest context of technological change. (Life-long learning)

PSO 1: Understand social & civic responsibilities, and rights of individuals or groups, while developing software tools. (Program Specific)

PSO 2: Demonstrate personal strengths & limitations, committed to critical thinking and performance evaluation, to manage software projects. (Program Specific)


Technical Corner

Rise of Cryptocurrency

If you hadn't already bought into the bitcoin craze, 2017 became the year you wish you had. Cryptocurrency grabbed our attention when hackers demanded it as ransom following the HBO hack, and as payment to unlock computers infected with WannaCry ransomware.

But that was only the beginning. Prices for bitcoin and ether shot way, way up, and pretty much everyone and their grandparents started to get in on the cryptocurrency market. Big banks bought in with the start of futures trading, and exchanges like Coinbase were even forced to remind everyone to just chill for a second. Someone even started selling cryptocurrency sweaters.

Whether or not bitcoin and its ilk will become a permanent mainstay of our economy, or crash and burn in the ferocious popping of a bubble, remains to be seen. But, after 2017, we'll all be watching.

….. Article Submitted By Janapana Hemanth Kumar, 4 CSE A



Tesla and self-driving tech

No hands.

This was the year self-driving cars became real. Sure, various forms of the tech existed before 2017, but over the course of the last 12 months we saw supposedly fully autonomous vehicles actually hit the streets of a major U.S. city with no driver behind the wheel. That is a big deal.

Tesla, of course, has grabbed headlines with its Model S, Semi, and new Roadster, but when it comes to autonomous tech, Waymo, Uber, and Lyft are all nipping at the company's heels.

Self-driving cars are here, and they're not going away. The technology developed in 2017 will have a profound impact on how goods and people travel around the country in the future.

….. Article Submitted By Jana Kiran, 4 CSE A


The disappearing phone

Where'd it go?

Say goodbye to the bezel. Over the course of 2017, companies like Apple and Samsung proved that nearly bezel-less phones are now the norm. The iPhone X is perhaps the most notable example of this, with its abandonment of the phone's chin and embrace of the notch.

This past year kicked off a race toward phones that are all screen, and coming models will likely push this trend even further.

….. Article Submitted By Mantri Surya, 4 CSE B


Machine learning steps in it

Knowing enough to be dangerous.

This year, we got a rare peek behind the machine-learning curtain, and, sadly, what we saw wasn't that inspiring. Sure, some developments gave us cause for cautious optimism, but it was the missteps that really defined 2017.

Google's artificial intelligence was called out for being both homophobic and sexist, and people began to realize that computers can inherit — and amplify — the biases of the people they seek to replace.

But there is some hope. People have started to realize the importance of building ethics into AI, and the well-publicized blunders of 2017 could provide a cautionary roadmap for moving forward. However it shakes out, though, the game has changed.

….. Article Submitted By Lingam Harsha, 3 CSE B


Augmented Reality

This couch changes everything.

Mark Zuckerberg and Facebook tried to set this year's agenda early on at F8, claiming that augmented reality is the next major platform. And while most people are not using AR in their daily lives (except dancing hot dog fans), the technology is expanding.

Ikea even got in on the game, publishing an AR app that allowed customers to virtually place furniture in their homes before purchasing. But that's only the tip of the iceberg. Apple released its ARKit — an augmented reality platform for the iPhone. Importantly, this requires no new hardware (other than a compatible iPhone), and promised to bring AR to the masses.

This has the potential to reshape how we game, consume art, and, well, pretty much do everything. And 2017 was the year it really kicked off.

….. Article Submitted By R Madhusudhan Reddy, 4 CSE C


Smart Speakers

Always listening for your every command.

The Echo and Google Home have taken the world by storm, and Apple is racing to catch up with the HomePod. Meanwhile, not content to be in one room of your house, Amazon has released additional in-home smart devices that purport to do everything from give you style tips to replace your alarm clock.

And while there has been some pushback to home-integrated smart devices, 2017 essentially primed the smart speaker to take over your entire home. That's a trend that isn't about to change.

….. Article Submitted By M Kaladhar, 4 CSE B


Researchers Crack the Code of 'Flying Doughnuts'

Scientists have figured out how to make doughnut-shaped pulses of light. And, no, you can't eat them — but this is a big deal for at least three other reasons:

1. "Doughnut-shaped pulses of light" is a fun phrase to write and think about.

2. The doughnut-shaped pulses could help scientists probe strange, doughnut-shaped magnetic formations in certain kinds of matter.

3. For the first time, scientists might be able to create waves with what physicists call "space- and time-dependent functions."

Every electromagnetic wave that's ever been created can be described using an equation, if we know its position in time or space, said Nikitas Papasimakis, one of the theorists behind the discovery and a physicist at the University of Southampton.

For instance, an electromagnetic pulse shaped like a sine wave looks more or less the same 5 seconds after it appears as it does 30 seconds after it's appeared (or, say, 5 or 30 feet from where it appeared). To describe it, you only need to know its position in time or space.
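To make that distinction concrete, here is a minimal sketch in standard wave notation (my illustration, not the researchers' own equations): an ordinary travelling pulse depends on position and time only through the single combination x - ct, while a space- and time-dependent pulse does not factor that way.

```latex
% Ordinary pulse: position and time enter only through u = x - ct,
% so knowing the pulse shape g and one coordinate describes it fully.
E(x, t) = E_0 \sin\bigl(k(x - ct)\bigr) = g(x - ct)

% "Space- and time-dependent" pulse (e.g., a flying doughnut):
% x and t enter independently and cannot be folded into one
% travelling-wave coordinate.
E(x, t) = f(x, t) \neq g(x - ct)
```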

"Flying doughnuts" are also waves; they're part of a class of special, theoretical waves first proposed in 1996 (which also include something called "focused pancakes") that are so weird and complex that, in order to work, the equations describing them require knowing the waves' position in both space and time, Papasimakis told Live Science.

If scientists generated a flying doughnut in the real world, it would be the first time humans ever created such a complicated wave.

Besides bragging rights, scientists want to create these waves for a more practical reason: so they can start to understand a weird behavior sometimes seen in matter, Papasimakis said.

In fact, a great deal of Papasimakis' recent work has focused on this strange behavior of matter. Under certain circumstances, matter gets electromagnetically excited. Scientists have a good understanding of the more common versions of this effect, like the two-ended magnets you stick on your fridge. But there's a less common version, the "toroidal magnetic excitation" — basically a doughnut-shaped, magnetically excited area within a chunk of matter — that is governed by physics that scientists are still figuring out.

It's not very well-studied, Papasimakis said — in part, because the effect is so weak.

Flying doughnuts, he said, could help researchers probe these toroidal excitations.


In order to generate a flying doughnut, scientists will need to build a special material that's essentially made up of a series of carefully arranged antennas, according to Papasimakis and his colleagues' paper, published May 23 in Physical Review B. The antennas could be different sizes and distances apart, depending on how big a doughnut you were trying to generate, he added.

The next step, he said, is to actually build one of these arrays and fire off a flying doughnut in real life. He and his colleagues, he said, are already working on it.

What is Augmented Reality?

Augmented reality is the result of using technology to superimpose information — sounds, images and text — on the world we see. Picture the "Minority Report" or "Iron Man" style of interactivity.

Augmented reality vs. virtual reality

This is rather different from virtual reality. Virtual reality means computer-generated environments for you to interact with, and be immersed in. Augmented reality (also known as AR), adds to the reality you would ordinarily see rather than replacing it.

Augmented reality in today's world

Augmented reality is often presented as a kind of futuristic technology, but a form of it has been around for years. For example, the heads-up displays in many fighter aircraft as far back as the 1990s would show information about the attitude, direction and speed of the plane, and only a few years later they could show which objects in the field of view were targets.

In the past decade, various labs and companies have built devices that give us augmented reality. In 2009, the MIT Media Lab's Fluid Interfaces Group presented SixthSense, a device that combined the use of a camera, small projector, smartphone and mirror. The device hangs from the user's chest in a lanyard fashion from the neck. Four sensor devices on the user's fingers can be used to manipulate the images projected by SixthSense.

Google rolled out Google Glass in 2013, moving augmented reality to a more wearable interface; in this case, glasses. It displays on the user's lens screen via a small projector and responds to voice commands, overlaying images, videos and sounds onto the screen. Google pulled Google Glass at the end of December 2015.

As it happens, phones and tablets are the way augmented reality gets into most people's lives. Vito Technology's Star Walk app, for instance, allows a user to point the camera in their tablet or phone at the sky and see the names of stars and planets superimposed on the image. Another app called Layar uses the smartphone's GPS and its camera to collect information about the user's surroundings. It then displays information about nearby restaurants, stores and points of interest.

Some apps for tablets and phones work with other objects as well. Disney Research developed an AR coloring book, in which you color in a character in a conventional (though app-compatible) book and launch the app on the device. The app accesses the camera and uses it to detect which character you are coloring, and uses software to re-create the character in 3D on the screen.

One of the most popular ways AR has infiltrated everyday life is through mobile games. In 2016, the AR game "Pokémon Go" became a sensation worldwide, with over 100 million estimated users at its peak, according to CNET. It ended up making more than $2 billion and counting, according to Forbes. The game allowed users to see Pokémon characters bouncing around in their own town. The goal was to capture these pocket monsters, then use them to battle others, locally, in AR gyms.

In 2018, "Harry Potter: Hogwarts Mystery" became the mobile AR gaming sensation. The game lets users see the Hogwarts world around them while having the ability to cast spells, use potions and to learn from Hogwarts teachers. As of this writing, the game had around 10 million downloads in the Google Play store.

Researchers are also developing holograms, which can take VR a step further, since holograms can be seen and heard by a crowd of people all at once.

"While research in holography plays an important role in the development of futuristic displays and augmented reality devices, today we are working on many other applications, such as ultrathin and lightweight optical devices for cameras and satellites," researcher Lei Wang, a doctoral student at the ANU Research School of Physics and Engineering, said in a statement.

The future of augmented reality

This doesn't mean that phones and tablets will be the only venue for AR. Research continues apace on including AR functionality in contact lenses, and other wearable devices. The ultimate goal of augmented reality is to create a convenient and natural immersion, so there's a sense that phones and tablets will get replaced, though it isn't clear what those replacements will be. Even glasses might take on a new form, as "smart glasses" are developed for blind people.

….. Article Submitted By Surya Patnaik, 3 CSE A


Hot Solar Cells

By converting heat to focused beams of light, a new solar device could create cheap and continuous power.

Solar panels cover a growing number of rooftops, but even decades after they were first developed, the slabs of silicon remain bulky, expensive, and inefficient. Fundamental limitations prevent these conventional photovoltaics from absorbing more than a fraction of the energy in sunlight.

But a team of MIT scientists has built a different sort of solar energy device that uses inventive engineering and advances in materials science to capture far more of the sun’s energy. The trick is to first turn sunlight into heat and then convert it back into light, but now focused within the spectrum that solar cells can use. While various researchers have been working for years on so-called solar thermophotovoltaics, the MIT device is the first one to absorb more energy than its photovoltaic cell alone, demonstrating that the approach could dramatically increase efficiency.

Standard silicon solar cells mainly capture visible light from violet to red. That and other factors mean that they can never turn more than around 32 percent of the energy in sunlight into electricity. The MIT device is still a crude prototype, operating at just 6.8 percent efficiency — but with various enhancements it could be roughly twice as efficient as conventional photovoltaics.

Breakthrough: A solar power device that could theoretically double the efficiency of conventional solar cells.

Why It Matters: The new design could lead to inexpensive solar power that keeps working after the sun sets.

The key step in creating the device was the development of something called an absorber-emitter. It essentially acts as a light funnel above the solar cells. The absorbing layer is built from solid black carbon nanotubes that capture all the energy in sunlight and convert most of it into heat. As temperatures reach around 1,000 °C, the adjacent emitting layer radiates that energy back out as light, now mostly narrowed to bands that the photovoltaic cells can absorb. The emitter is made from a photonic crystal, a structure that can be designed at the nanoscale to control which wavelengths of light flow through it. Another critical advance was the addition of a highly specialized optical filter that transmits the tailored light while reflecting nearly all the unusable photons back. This “photon recycling” produces more heat, which generates more of the light that the solar cell can absorb, improving the efficiency of the system.


Figure: Black carbon nanotubes sit on top of the absorber-emitter layer, collecting energy across the solar spectrum and converting it to heat. The absorber-emitter layer sits above the optical filter and photovoltaic cell.

….. Article Submitted By Akbar, 3 CSE B


Paying with Your Face

Face-detecting systems in China now authorize payments, provide access to facilities, and track down criminals. Will other countries follow?

Availability: Now

Shortly after walking through the door at Face++, a Chinese startup valued at roughly a billion dollars, I see my face, unshaven and looking a bit jet-lagged, flash up on a large screen near the entrance. Having been added to a database, my face now provides automatic access to the building. It can also be used to monitor my movements through each room inside. As I tour the offices of Face++ (pronounced “face plus plus”), located in a suburb of Beijing, I see it appear on several more screens, automatically captured from countless angles by the company’s software. On one screen a video shows the software tracking 83 different points on my face simultaneously. It’s a little creepy, but undeniably impressive.

Over the past few years, computers have become incredibly good at recognizing faces, and the technology is expanding quickly in China in the interest of both surveillance and convenience. Face recognition might transform everything from policing to the way people interact every day with banks, stores, and transportation services. Technology from Face++ is already being used in several popular apps. It is possible to transfer money through Alipay, a mobile payment app used by more than 120 million people in China, using only your face as credentials. Meanwhile, Didi, China’s dominant ride-hailing company, uses the Face++ software to let passengers confirm that the person behind the wheel is a legitimate driver. (A “liveness” test, designed to prevent anyone from duping the system with a photo, requires people being scanned to move their head or speak while the app scans them.)

The technology figures to take off in China first because of the country’s attitudes toward surveillance and privacy. Unlike, say, the United States, China has a large centralized database of ID card photos. During my time at Face++, I saw how local governments are using its software to identify suspected criminals in video from surveillance cameras, which are omnipresent in the country. This is especially impressive — albeit somewhat dystopian — because the footage analyzed is far from perfect, and because mug shots or other images on file may be several years old.

Facial recognition has existed for decades, but only now is it accurate enough to be used in secure financial transactions. The new versions use deep learning, an artificial-intelligence technique that is especially effective for image recognition because it makes a computer zero in on the facial features that will most reliably identify a person (see “10 Breakthrough Technologies 2013: Deep Learning”).
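As a rough sketch of how such systems tend to work (an illustration of the general deep-learning recipe, not Face++'s actual pipeline; all names, sizes, and the stand-in "network" below are made up), a trained network maps each face to an embedding vector, and identification is then a nearest-neighbor search over stored embeddings:

```python
# Minimal sketch: face ID as embedding + nearest-neighbor search.
# The "network" is a fixed random projection so the example runs
# without a real model or real photos.
import numpy as np

rng = np.random.default_rng(42)
PROJECTION = rng.normal(size=(64 * 64, 128))   # stand-in for a trained CNN

def embed(face_image):
    """Map a 64x64 face image to a unit-length 128-d embedding."""
    v = face_image.reshape(-1) @ PROJECTION
    return v / np.linalg.norm(v)

# Enroll known faces (random arrays stand in for ID-card photos)
people = {name: rng.random((64, 64)) for name in ["Mr. Tang", "driver_0042"]}
database = {name: embed(img) for name, img in people.items()}

# A new camera frame: Mr. Tang again, with a little sensor noise
probe = embed(people["Mr. Tang"] + 0.05 * rng.random((64, 64)))

# Cosine similarity against every enrolled face; highest score wins
scores = {name: float(probe @ emb) for name, emb in database.items()}
print(max(scores, key=scores.get), scores)     # "Mr. Tang" scores highest
```

Scanning a database of millions then amounts to the same similarity search at scale, typically with an approximate nearest-neighbor index rather than the exhaustive loop shown here.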


“The face recognition market is huge,” says Shiliang Zhang, an assistant professor at Peking University who specializes in machine learning and image processing. Zhang heads a lab not far from the offices of Face++. When I arrived, his students were working away furiously in a dozen or so cubicles. “In China security is very important, and we also have lots of people,” he says. “Lots of companies are working on it.”

Figure: Employees simply show their face to gain entry to the company’s headquarters.

One such company is Baidu, which operates China’s most popular search engine, along with other services. Baidu researchers have published papers showing that their software rivals most humans in its ability to recognize a face. In January, the company proved this by taking part in a TV show featuring people who are remarkably good at identifying adults from their baby photos. Baidu’s system outshined them.

Figure: Face++ pinpoints 83 points on a face. The distance between them provides a means of identification.

Now Baidu is developing a system that lets people pick up rail tickets by showing their face. The company is already working with the government of Wuzhen, a historic tourist destination, to provide access to many of its attractions without a ticket. This involves scanning tens of thousands of faces in a database to find a match, which Baidu says it can do with 99 percent accuracy. Jie Tang, an associate professor at Tsinghua University who advised the founders of Face++ as students, says the convenience of the technology is what appeals most to people in China. Some apartment complexes use facial recognition to provide access, and shops and restaurants are looking to the technology to make the customer experience smoother. Not only can he pay for things this way, he says, but the staff in some coffee shops are now alerted by a facial recognition system when he walks in: “They say, ‘Hello, Mr. Tang.’”


Will AI Ever Become Conscious?

When science-fiction worlds introduce robots that look and behave like people, sooner or later those worlds' inhabitants confront the question of robot self-awareness. If a machine is built to truly mimic a human, its "brain" must be complex enough not only to process information as ours does, but also to achieve certain types of abstract thinking that make us human. This includes recognition of our "selves" and our place in the world, a state known as consciousness.

One example of a sci-fi struggle to define AI consciousness is AMC's "Humans" (Tuesdays 10/9c, starting June 5). At this point in the series, human-like machines called Synths have become self-aware; as they band together in communities to live independent lives and define who they are, they must also battle for acceptance and survival against the hostile humans who created and used them.

But what exactly might "consciousness" mean for artificial intelligence (AI) in the real world, and how close is AI to reaching that goal?

Philosophers have described consciousness as having a unique sense of self coupled with an awareness of what's going on around you. And neuroscientists have offered their own perspective on how consciousness might be quantified, through analysis of a person's brain activity as it integrates and interprets sensory data.

However, applying those rules to AI is tricky. In some ways, the processing abilities of AI are not unlike those that take place in human brains. Sophisticated AI systems use a process called deep learning to solve computational tasks quickly, using networks of layered algorithms that communicate with each other to solve more and more complex problems. It's a strategy very similar to that of our own brains, where information speeds across connections between neurons. In a neural network, deep learning enables AI to teach itself how to identify disease, win a strategy game against the best human player in the world, or write a pop song.

But to accomplish these feats, any neural network still relies on a human programmer setting the tasks and selecting the data for it to learn from. Consciousness for AI would mean that neural networks could make those initial choices themselves, "deviating from the programmers' intentions and doing their own thing," Edith Elkind, a professor of computing science at the University of Oxford in the U.K., told Live Science in an email.

Figure: In the third season of the AMC series "Humans," humanlike robots called Synths that have achieved self-awareness struggle with the consequences.

"Machines will become conscious when they start to set their own goals and act according to these goals rather than do what they were programmed to do," Elkind said.

"This is different from autonomy: Even a fully autonomous car would still drive from A to B as told," she added.

One of the pitfalls for machines becoming self-aware is that consciousness in humans is not well-defined enough, which would make it difficult, if not impossible, for programmers to replicate such a state in algorithms for AI, researchers reported in a study published in October 2017 in the journal Science.

The scientists defined three levels of human consciousness, based on the computation that happens in the brain. The first, which they labeled "C0," represents calculations that happen without our knowledge, such as during facial recognition, and most AI functions at this level, the scientists wrote in the study.

The second level, "C1," involves a so-called "global" awareness of information — in other words, actively sifting and evaluating quantities of data to make an informed, deliberate choice in response to specific circumstances.

Self-awareness emerges in the third level, "C2," in which individuals recognize and correct mistakes and investigate the unknown, the study authors reported.

"Once we can spell out in computational terms what the differences may be in humans between conscious and unconsciousness, coding that into computers may not be that hard," study co-author Hakwan Lau, a UCLA neuroscientist, previously told Live Science.

To a certain extent, some types of AI can evaluate their actions and correct them responsively — a component of the C2 level of human consciousness. But don't expect to meet self-aware AI anytime soon, Elkind said in the email.

"While we are quite close to having machines that can operate autonomously (self-driving cars, robots that can explore an unknown terrain, etc.), we are very far from having conscious machines," Elkind said.

So, for now, if you want to see "conscious" AI in action, you can watch the Synths vie for their rights in "Humans." The third season debuts June 5 at 10/9c.

….. Article Submitted By B Ashish, 3 CSE


Here’s what AlphaGo’s historic win means for the enterprise

Deep learning – a machine learning technique based on artificial neural networks – is growing in popularity due to a series of developments in the science and business of data mining.

Prior to AlphaGo’s victory over Lee Sedol, then the best Go player in the world, computer programs that played Go had only been able to beat average players. Indeed, the accomplishment can be seen as a major milestone in developing computer programs that are on a par with, and even exceed, human levels of intelligent behavior.

March 15, 2016 brought us a milestone in artificial intelligence 10 years earlier than experts expected: AlphaGo, the AI-based computer created by Google DeepMind, beat world champion Go player Lee Sedol at the game. Go is among the most ancient of games — simple in concept, yet spectacularly complex to master. The final score in the five-game match was 4-1, but after AlphaGo took a 3-0 lead, it was clear we were in a new era. Sedol himself said after the match, “I never imagined I would lose. It’s so shocking.”


This may sound familiar, bringing you back to IBM’s Deep Blue beating chess champion Kasparov in 1997. But under the covers, AlphaGo is as different from Deep Blue as the DVDs introduced in 1997 are from a Netflix movie download. Deep Blue’s strength came from brute force computing — literally evaluating the likely result of each possible move. When playing Go, brute force search is not an option. The number of possible plays is simply too vast, even in comparison to chess. There are about 10^170 legal plays on the Go board. To put this astounding number into perspective, our entire universe contains only about 10^80 atoms. This is why Go has been seen as the holy grail for artificial intelligence (AI) research. Winning at Go is not about evaluating every possible move; it requires strategy — and according to Sedol himself, AlphaGo’s strategy was “excellent.”
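A quick back-of-the-envelope calculation (my illustration, using the commonly quoted rough figures of about 250 legal moves per turn over a roughly 150-move game) shows why brute force is hopeless:

```python
# Rough estimate of the Go search space versus atoms in the universe.
import math

log10_sequences = 150 * math.log10(250)   # ~ log10(250^150)
print(f"Go: ~10^{log10_sequences:.0f} possible move sequences")  # ~10^360
print("Observable universe: ~10^80 atoms")
```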

But enough about games. There are broader implications here. We can expect similar advances in commercial applications, such as self-driving cars. Demis Hassabis, who heads Google’s machine learning team, previously said, “The methods we’ve used are general-purpose; our hope is that one day they could be extended to help us address some of society’s toughest and most pressing problems, from climate modelling to complex disease analysis.”

These machine learning methods will also have significant impact on how we perform unstructured and complex business processes and decision-making tasks in day-to-day work.

Businesses already use AI and machine learning to deliver millions of valuable recommendations and observations every day. Well-known examples include product recommendations by Amazon, movie recommendations from Netflix, and personalized search results from Google. In the enterprise, examples include customer targeting, lead scoring, opportunity risk analysis, sales forecasting, and churn prediction. So with AI already delivering daily business value and AlphaGo’s victory in the news, a natural question for those of us in enterprise computing is: what should we expect next from AI and machine learning in the enterprise?

What’s new in AlphaGo?


What differentiates AlphaGo from previous technology is its learning capability. AlphaGo learns using two complementary deep neural networks: one decides which moves are more promising (data scientists say this “reduces the width of the search space”), the other learns an “intuition” about how likely it is a potential play will result in a win (“reducing the depth of the search space”). These two networks learn — or we could say are trained — first by analyzing many past matches played by professionals. This is known as “learn by example” or “supervised learning.” Based on that foundation, AlphaGo then improves by playing games against itself — millions of games at a dazzling speed most of us can barely imagine. This self-play is known as “reinforcement learning.” If you remember the 1983 movie War Games, in which a computer “decides” not to start World War III by playing out different scenarios at lightning speed only to learn that every scenario results in world destruction, you’ve got an image of this sort of self-play. AlphaGo is not fed the winning patterns of Go. Instead, it abstracts and summarizes patterns from actually playing Go. In this way, AlphaGo is truly “intelligent” about playing the game.
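As a loose sketch of the idea (my own toy code, not DeepMind's architecture; the tiny untrained networks below merely stand in for AlphaGo's deep ones), the policy network prunes the width of the search while the value network prunes its depth:

```python
# Toy sketch of AlphaGo's two-network idea: a policy net narrows WHICH moves
# to examine (search width); a value net scores a position without playing it
# out (search depth). Weights here are random; in AlphaGo they were trained on
# professional games (supervised) and then refined by self-play (reinforcement).
import numpy as np

rng = np.random.default_rng(0)
N = 81  # a 9x9 board, flattened

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class TinyNet:
    """One hidden layer standing in for a deep network."""
    def __init__(self, n_in, n_out):
        self.w1 = rng.normal(0, 0.1, (n_in, 64))
        self.w2 = rng.normal(0, 0.1, (64, n_out))
    def forward(self, board):
        return np.tanh(board @ self.w1) @ self.w2

policy_net = TinyNet(N, N)   # one logit per candidate move
value_net = TinyNet(N, 1)    # scalar estimate of winning chances

board = np.zeros(N)                              # empty position
priors = softmax(policy_net.forward(board))      # move probabilities
top_moves = np.argsort(priors)[-5:]              # search only promising moves

for m in top_moves:
    child = board.copy()
    child[m] = 1.0                               # play the candidate move
    value = np.tanh(value_net.forward(child))[0] # evaluate, no full rollout
    print(f"move {m}: prior={priors[m]:.4f}, est. value={value:+.3f}")
```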

What new capabilities can a learning system offer?

Not long ago, even the most advanced supercomputer could not keep up with a four-year-old child in identifying cats in photos. No longer. With rapid advancement in AI, we are seeing breakthroughs on many tasks considered formidable challenges for computers — not just object recognition in images, but self-driving cars, question answering through natural language, composing newspaper articles, even painting and drawing.

Replicating actions humans consider trivial is only the start. AI routinely considers options ignored by human beings. For example, in AlphaGo’s first three games, it made “surprising” moves a typical human professional would not consider. At the time, some observers identified the moves as mistakes. Yet 20 more steps proved these surprising moves to be brilliantly innovative tactics. I believe Go professionals will study these moves and, in doing so, expand the set of options they consider in future human-only championships. In this sense, AI is creative, helping humans achieve more.

Surely there are limitations? Absolutely.

Although research in AI started in the 1950s, true learning systems are still in their infancy. It’s true that AlphaGo learned quickly. It took only five months for AlphaGo to move from beating a level-2 Go professional to beating Sedol, a level-9 champion. That progress would take a talented human years. But if we compare AlphaGo to that talented human, the path is quite different. AlphaGo was able to play tens of millions of games in that five-month span, where a person can only play 1,000 or so games per year. So AlphaGo, and AI in general, is data inefficient in terms of learning. You may think this is a moot point because an AI system is capable of playing millions of games. But keep in mind that for many applications outside gaming, self-play is not practical, making learning a significant hurdle. So it remains an area of intense study for AI scientists.

Moreover, Go is a relatively simple task for AI because, even with its daunting set of options, it is well defined. Each player has complete information about the state of the game, past moves, and available future moves — no uncertainty. Contrast this with a game like bridge, where each player must make guesses about unknown cards, or poker, where a player’s ability to bluff adds new twists. And for games like Go, each move is deterministic and the final rewards are explicit: either a win or a loss. In the real world, especially for many situations in an enterprise, only partial information is available, and the final reward is difficult to quantify.

What is AlphaGo for the enterprise?

As I mentioned in a recent post, Data Science, Self-driving Applications, and the Rise of Conversational UI, “self-driving” enterprise applications — able to seek out data, apply intelligence, and present findings in a useful way — are the new frontier. With the addition of AI, many enterprise apps will act more like human assistants. They will detect relevant context changes (location, target customer, timing) and deliver relevant information at the moment it is most helpful. The interaction between a user and their applications will be more natural, more like talking to a trusted human assistant than enduring endless typing and clicking. And value grows over time as the AI analyzes the results of ongoing operations, such as marketing campaigns, lead conversions, sales meetings, email flows, interactions with customer success teams, or customer churn.

You may find yourself thinking, “Sure, if I had infinite time to pore over reports, I could see useful trends too.” That, of course, is the point. AI allows tedious tasks to be handled by machines, allowing people to concentrate on tasks better suited to us as humans. This brings us back to the point that AI systems, like AlphaGo, excel where “codified rules” exist. Go options may be so numerous as to seem infinite, but the rules of the game are clear. Even the 10^170 possible moves do not include throwing off your opponent by showing them pictures of your cat. Letting machines handle the tedium means people can make the leaps of creative intuition still far out of reach for AI.

The real world requires the complementary nature of artificial intelligence and human intelligence. AI thrives at computation, memorizing, and even reasoning as long as the problem space is constrained. Human beings excel at perception, decision making, disruptive creativity, and interpersonal relationships. Success in the enterprise requires so many mundane tasks: updating data records, monitoring databases for changes, evaluating real-time results in marketing campaigns, detecting which customers are likely to churn, etc. All of these are candidates for automation and, in particular, candidates for AI because they require a system able to learn the difference between critical observations and irrelevant anomalies. As a result, human beings can focus on tasks requiring the unique spark of human intelligence: creating an unprecedented campaign, meeting one-on-one to win over clients, or designing the next generation of AI. Palantir, a company creating analysis software for U.S. government anti-terrorism efforts and the financial industry, offers a sophisticated example, building what they call “man-machine symbiosis.”

Before leaving the topic of human/AI teamwork, we should acknowledge that people and machines also make mistakes. So the power of teaming is not just to reach new heights but also to reduce errors. The protection machines can offer ranges from the trivial, such as an email system that warns you when message content implies an attachment should be included, to the life-saving, like the loud “stall” warning in the cockpit of a commercial jet. And no one who has used voice recognition or auto-correct on their phone needs to be reminded that computers make mistakes. Machines and humans must work together to deliver optimal results.

As impressive as the AlphaGo victory is, we’re early in the development of AI systems. That means we’re also early in understanding the best ways for people and AI systems to join forces. But just as AlphaGo made innovative moves in Go that will ignite new thinking and creativity among the best Go players, we are confident tomorrow’s AI will spark new innovation among those who embrace its value.

….. Article Submitted By Sunkari Anil, 3 CSE C


Hands-free help from the Google Assistant.

Google Home is a brand of smart speakers developed by Google. The first device was announced in May 2016 and released in the United States in November 2016, with subsequent releases globally throughout 2017.

Google Home speakers enable users to speak voice commands to interact with services through Google's intelligent personal assistant, called Google Assistant. A large number of services, both in-house and third-party, are integrated, allowing users to listen to music, control playback of videos or photos, or receive news updates entirely by voice. Google Home devices also have integrated support for home automation, letting users control smart home appliances with their voice. Multiple Google Home devices can be placed in different rooms in a home for synchronized playback of music. An update in April 2017 brought multi-user support, allowing the device to distinguish between up to six people by voice. In May 2017, Google announced multiple updates to Google Home's functionality, including: free hands-free phone calling in the United States and Canada; proactive updates ahead of scheduled events; visual responses on mobile devices or Chromecast-enabled televisions; Bluetooth audio streaming; and the ability to add reminders and calendar appointments.

The original product has a cylindrical shape with colored status LEDs on top for a visual representation of its status, and the cover over the base is modular, with different color options offered through the Google Store, intended to let the device blend into the environment. In October 2017, Google announced two additions to the product lineup: the Google Home Mini and the Google Home Max.


Why is this happening now in particular, considering neural networks have been around for at least 50 years?

Some history of deep learning

Researchers like Frank Rosenblatt created one of the first artificial neural networks, inspired by findings in neuroscience from the 1940s. Rosenblatt developed the so-called "perceptron," which can learn from a set of input data similar to how biological neurons learn from stimuli.

An artificial neuron consists of a set of input weights, similar to the dendrites of a neuron. The biological neuron processes the electrical input charges and produces an output that is channeled through the axon, which in turn is connected to other neurons.


Figure 1: A biological neuron consists of dendrites, the nucleus and the axon that transmits electrical impulses to other neurons.

Figure 2: An artificial neuron can be seen as an abstraction of a biological neuron.

Interest in this research, however, was dampened when Marvin Minsky and Seymour Papert published a book on perceptrons that highlighted various shortcomings. They showed, for example, that a single artificial neuron cannot model the logical operator XOR (the output value of this operator is only true if the input values differ, also known as exclusive or).
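A small experiment makes this concrete. The sketch below (my own illustration) trains a single perceptron with Rosenblatt's update rule: it learns AND, which is linearly separable, but no setting of its weights can ever reproduce XOR, because XOR's outputs cannot be separated by one straight line.

```python
# A single perceptron learns AND but can never learn XOR.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = int(w @ xi + b > 0)       # step activation
            w += lr * (target - pred) * xi   # Rosenblatt's update rule
            b += lr * (target - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
for name, y in [("AND", np.array([0, 0, 0, 1])),
                ("XOR", np.array([0, 1, 1, 0]))]:
    w, b = train_perceptron(X, y)
    preds = [int(w @ xi + b > 0) for xi in X]
    ok = "learned" if np.array_equal(preds, y) else "NOT learned"
    print(f"{name}: {ok}, predictions = {preds}")
# AND is learned perfectly; XOR never converges to the right labels.
```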

More complex neural networks with dense connectivity between the neurons are able to model those Boolean operators (operators that connect one or two sentences, resulting in only two possible values: true or false), as was already known at the time. But their findings still had a chilling effect on the research community and strengthened the so-called symbolic artificial intelligence approaches, which relied more on rule-based systems.

Despite this climate, a few researchers continued to develop these techniques in the 1980s and 1990s, and most current algorithms are based on those networks. Researchers like Geoffrey Hinton, professor at the University of Toronto and employee at Google, and Yann LeCun, director of Facebook AI Research, persisted in their research in this area despite it being generally not recommended to grad students at the time.


At the beginning of the new millennium, a perfect storm developed and neural networks were suddenly back in the limelight.

What has fueled this development?

Some early deep learning models and algorithms were introduced as early as the 1990s [1,2], but there were two major barriers in their development.

First, the models depended on copious amounts of training data to perform well. An image recognition system would require millions of labeled or captioned images, which were previously only available through small, digitized collections of books and news articles. The advent of Web 2.0 and social media platforms such as Flickr, Instagram and Facebook supplied these models with the scale and scope of data they needed.


Second, deep networks often require immense processing power. The advent of powerful processing technologies (GPUs) and architectural paradigms that facilitate distributed processing (cluster/cloud computing) allowed deep learning models to thrive. These developments, in turn, led to advancements in deep learning algorithms. Since the early 2010s, several new models and network configurations have emerged.


Recent breakthroughs in deep learning

Deep learning uses the power of abstraction to derive meaning from data. When processing a collection of portraits, for instance, instead of focusing on individual pixels, a deep network identifies recurring patterns such as eyes, noses, the silhouette of the face, etc. Each layer of the network contributes to constructing these abstractions from the data (see Figure 3). As a result, image processing has benefited greatly from the advent of deep learning models.

Figure 3: Each layer in a network recognizes a different aspect of the image. Source: Deep Learning: A Bird's-Eye View, by R. Pieters, 2015, pp. 58, 62.
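For a sense of what this layering looks like in practice, here is a minimal sketch (illustrative only; the layer sizes and the face/not-a-face task are my assumptions) of a small convolutional network whose stacked layers mirror that pixels-to-parts hierarchy:

```python
# Each convolutional layer sees a wider patch of the image than the last,
# so early layers respond to edges and deeper layers to larger patterns.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # layer 1: edges, spots
    nn.ReLU(),
    nn.MaxPool2d(2),                             # halve resolution
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # layer 2: corners, curves
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), # layer 3: parts (eyes, noses)
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                            # e.g. face / not-a-face
)

x = torch.randn(1, 1, 64, 64)   # one fake 64x64 grayscale image
print(model(x).shape)           # torch.Size([1, 2])
```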

Text mining is another area of success. Automated language translation systems often tried to align source text with target text using a word-to-word or phrase-to-phrase mapping. Deep models, on the other hand, are able to recognize one-to-many and many-to-one relations between source and target languages.

Figure 4: Deep networks are able to translate one language to another by recognizing sophisticated alignments. Source: Deep NLP, by Richard Socher, 2015, pp. 12-14.
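A toy illustration of such soft alignment (all numbers below are made up for the example): attention weights let each target word draw on several source words at once, capturing the many-to-one relations that rigid word-to-word mappings miss.

```python
# Invented attention matrix between a German source and an English target.
import numpy as np

source = ["Wie", "geht", "es", "dir", "?"]
target = ["How", "are", "you", "?"]

# attention[i, j] = how much target word i "looks at" source word j
attention = np.array([
    [0.85, 0.05, 0.05, 0.03, 0.02],   # "How" <- mostly "Wie"
    [0.05, 0.45, 0.40, 0.05, 0.05],   # "are" <- "geht" + "es" (many-to-one)
    [0.03, 0.10, 0.07, 0.75, 0.05],   # "you" <- "dir"
    [0.02, 0.03, 0.03, 0.02, 0.90],   # "?"   <- "?"
])

for i, tgt in enumerate(target):
    j = attention[i].argmax()
    print(f"{tgt!r} aligns most strongly with {source[j]!r}")
```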

There are many further areas of success in applying deep learning to data mining. Google famously improved its voice transcription service thanks to a deep network that could recognize complicated speech patterns.

….. Article Submitted By Jallu Ravi Kiran, 3 CSE B


ART GALLERY

….. Article Submitted By Harish J, 3 CSE A

….. Article Submitted By Rajitha V J, 3 CSE C


….. Article Submitted By Rishitha V,3 CSE C

FUNTIME

What is fast, loud and crunchy?

A rocket chip!

Why did the teddy bear say no to dessert?

Because she was stuffed.

What has ears but cannot hear?

A cornfield.

What did the left eye say to the right eye?

Between us, something smells!

What do you get when you cross a vampire and a snowman?

Frost bite!


What did one plate say to the other plate?

Dinner is on me!

Why did the student eat his homework?

Because the teacher told him it was a piece of cake!

….. Article Submitted By R Rahul,3 CSE C

SUDOKU

ANS

….. Article Submitted By Ram Gopal,2 CSE A

Students Highlights

November 2017

Three students, G Sai Chandu, G Chandra Sekhar & Aizaz (Final CSE), underwent the interview process of Vivlex and were selected.

Three students, Benkili Sai Kiran, Kum Kum & Anusha P (Final CSE), underwent the interview process of Sphereme and were selected.

G Sai Charan (Final CSE) underwent the interview process of Just Dial in a campus recruitment drive and was selected.

Three students, R Harish, D Srikanth & K L Deedepya (Final CSE), underwent the interview process of Apps Associates and were selected.

D Srikanth (Final CSE) underwent the interview process of Efftronics in a campus recruitment drive and was selected.

Bangaru Satyanarayana & Surya (Final CSE) attended a powerlifting competition.

October 2017

81 students participated in the Capgemini coding contest; 48 qualified for Round 2, of whom 18 have cleared it so far and are awaiting Round 3.

Jayashree (III CSE B) scored 100/100 in Round 1 of the Capgemini coding test conducted on 07-10-2017.

Six students, D Akilesh Rajana, Surisetty Sai Kumar, Paradesi R, Runku Harish, V Sai Manoj & T Harshawardhan of CSE, GMRIT, were selected for the hands-on workshop on AI held on October 13th & 14th, 2017.

Andhavarapu Meghana (509), Final CSE, underwent the interview process of Hyundai Mobis and was finally selected.

Andhavarapu Meghana (512), Final CSE, underwent the interview process of Socotronics and was finally selected.

Five Final CSE students underwent the interview process of PCS Tech and were finally selected.



Harsha Wardhini (Final CSE) underwent the off-campus interview process of Reliance Jio and was finally selected.

Three students, S Sai Kumar, D Srikanth & T Chitti Sai Santosh (Final CSE), cleared Round 1 of Neural Hack and took the Round 2 exam on 27th October 2017 under webcam surveillance.

September 2017

P Ratnakar (5c7), Final CSE, was selected through CodeVita 2017 and finally placed in TCS.

Andhavarapu Meghana (509), Final CSE, underwent the interview process of AVISO and was finally selected.

M Satya Srikanth & K Shiva Harsha (Final CSE) published a paper entitled "Noteless Transactions using RFID" in the International Journal for Scientific Research & Development.

Ravi Kumar Palo, Bharath M, Karthik M & Md Shanavaz qualified in Google's Code Jam Kickstart Round F conducted on 24-09-2017, scoring 42 out of 80 and earning eligibility for further rounds.

31 Final CSE students successfully cleared Round 1 of M/s CGI.

Ten third-year students were nominated by the Talent Management & Skill Development Committee for a special guest talk on Artificial Intelligence at the Platinum Jubilee Guest House, Andhra University.

A guest lecture was conducted by industry expert Subrahmanyam Addagalla (TCS) for final-year students; 147 students participated.

Kusa Raju & D Bharath, Microsoft Student Partners (Third CSE), organized a seminar on Microsoft Azure.

DBS CampusHack17: 161 total registrations from CSE.

Teachers' Day celebrations on Sept 5th, 2017, were organized by the final years.

Engineers' Day celebrations on Sept 15th, 2017, were organized by the third years.

August 2017


P Ratnakar (5c7), Final CSE, was selected through CodeVita 2017 for the interview process and will attend the drive on Aug 19th.

Jallu Kiran (561), Pre-Final CSE, was selected in CodeVita 2017 Round II and is awaiting the final call for interview.

J Nikhil & Runku Harish (Final CSE) submitted a project entitled "A Novel Image Analysis Framework Model for Classification of Malaria Parasites using Machine Learning Techniques" for AICTE I3 2017.

Ravi Kumar Palo, Bharath & Karthik submitted a project entitled "A Smart IoT-Based Tracking System using a Deep Learning Framework for Public/Private Organizations" for AICTE I3 2017.

Skill GMRIT, a program on problem-solving skills, was launched with 59 pre-final-year students.

Skill GMRIT night sessions started on August 21st, 2017, to strengthen students' skills (a combined practice across all branches, anchored by CSE students).

Dinesh & Santosh (Pre-Final CSE) won first prize in the "Step Up" dance competition held within the GMR group.

122 voluntary registrations for Rookies & Formac placement-related mock exams.

An ISTE event, "Word Mania", was conducted by the Dept of CSE with II and III year students.

The CSI Student Chapter conducted a workshop on app development for II-year students.

July 2017

Our B.Tech alumnus of the 2017 batch, Mr. Rama Harsha Kuncham (CSE), was invited by Facebook Ltd, California, for his project IMUX.


Our B.Tech alumnus of the 2017 batch, Kadheer Basha, has been with IBM as a Developer Manager since July 2017.

Highest participation in CodeVita 2017: 267 members participated from CSE (100% participation).

33 of our students are pursuing FSI in various software industries.

Three of our 3rd-year students, Mr. P. Charanrajh, Mr. P. V. Sathvik and Mr. P. V. Vamsi Kiran, started a startup in Visakhapatnam in the field of digital marketing.

59 Final CSE students registered for IBM Big Data Analytics with Hadoop, an industry-driven course, out of their own interest.

64 Pre-Final CSE students registered for MOOC courses out of their own interest.

140 students registered for the one-credit course by TMAXSOFT, which is purely industry driven.

EC/CC Activity-1 was conducted by the student coordinators of ISTE and CSI, under faculty guidance, to understand the primary use of EC/CC.

Faculty Highlights

November 2017

Visit to IIHT, Visakhapatnam, by Dr A V Ramana & Dr V Prasad.

Visit to ESG Infor Tech, VSEZ, Duvvada, by Dr A V Ramana & Dr V Prasad.

Dr R Priya Vaijayanthi & Dr V Prasad were certified as Licensed Trainers for the School of Design Thinking by 8012 FinTech Ltd, Chennai.

Sri V Srinadh attended a 5-day workshop on Networks and Cryptography at MVGR.

Dr A V Ramana delivered a guest lecture at SISTAM Engg College, Srikakulam.

Sri D Siva Krishna & Sri CH Koteswara Rao successfully completed a 4-week internship program at the Indian Institute of Hardware Technology, Visakhapatnam.

7 CSE faculty members were nominated for a 2-day workshop on "Effective Integration of ICT Tools in Teaching & Learning".

N Lakshmi Devi & K Srividhya had a publication in the International Journal of Network Communication and Emerging Technologies.

October 2017

Dr V Prasad & Mrs K Srividhya were certified in their NPTEL online courses.

Internship acceptance for CSE faculty by the Indian Institute of Hardware Training, Visakhapatnam.

Sri D Siva Krishna & Sri Ch Koteswara Rao qualified in AURCET, conducted by Andhra University in 2017.

Ms G Neelima had a publication in a Scopus-indexed journal in the month of October.

September 2017

Dr V Prasad completed his NPTEL online exam on the course "Fundamentals of Database".

Mrs K Sri Vidhya completed her NPTEL online exam on the course "Fundamentals of Database".



Ch Chakradhara Rao, P Muralidhara Rao, Ch Koteswara Rao, D Siva Krishna, J Bharath Kumar, N Lakshmi Devi and V Mahalakshmi gave their best in Andhra University's AURCET 2017 and are awaiting the results.

Dr V Sree Rama Murthy headed the Microsoft Azure session, as part of the Microsoft Student Partner program, with a crowd of 112 students in Lab 1.

August 2017

Dr Deebak steered two teams toward the AICTE India Innovation Initiative (I3) 2017 and saw their submissions through successfully.

A paper submitted by Dr Deebak was accepted in the Proceedings of the National Academy of Sciences, India Section A: Physical Sciences (SCIE-indexed).

Skill GMRIT, a team combined with IT & ECE, took shape in the month of August 2017.

July 2017

An innovation grant of Rs. 34.6 lakh was received from DST-TIDE, with Dr V Sreerama Murthy as the Principal Investigator.

Springer publications for the month of July by Dr V Prasad & Mr K Lakshmana Rao.

Springer conference publication by Mr D Siva Krishna, Mr CH Koteswara Rao & Mr P Muralidhara Rao.

A Scopus-indexed paper by Mr M Ramachandra was accepted for publication.

Springer conference publication by Dr V Prasad in Data Engineering & Intelligent Computing.

Award of PhD degree to our faculty member Dr V Prasad.

Internal promotions of our faculty: Dr V Prasad, Smt I Srilakshmi & Sri K Koteswara Rao.

A new M.Tech course titled Computer Science in Cyber Security was introduced from the current academic year (curriculum and syllabus successfully set).


Submission of PhD thesis by Sri M Balajee.

Our faculty member Dr B D Deebak submitted a DST proposal.

Our faculty, as a team, motivated all final- and third-year students toward TCS CodeVita 2017.

Three guest lectures and one workshop were conducted to align departmental activities with industry needs.

Successful completion of the 8th External BOS; the 5th and 6th semester curriculum of AR-16 was finalized.

Successful completion of the ISO external surveillance audit on 17-7-2017.

Sri K Lakshmana Rao completed the CEH course in cyber security conducted by EC-Council.

Dr V Prasad officially registered for the NPTEL online course "Databases".

An MDP was conducted by Dr A V Ramana, Dr V S R Murty and Dr Priya Vaijayanthi for all CSE faculty members and staff.

CSE in News

# SKILL GMRIT SESSIONS INITIATION

# Dr V Prasad, Award of PhD Degree


# JAN 1ST WISHES AND GROUP PHOTOGRAPH WITH PRINCIPAL, GMRIT

# Faculty Seminar


# NPTEL WORKSHOP, CSE HOD AS NPTEL SPOC

# CSE SPORTS CHAMPIONS


# STECONE 2017 WORKING STILLS

# TEACHERS DAY CELEBRATIONS


# CODEVITA BABU

# FACULTY SUMMER INTERNSHIP PROGRAMME LAST DAY SESSION


# DESIGN THINKING WORKSHOP IN FINTECH 8012

# CSE POWER @ UNIVERSITY CAMPUS
