Code No.: 18ITU28
Subject: OPEN SOURCE TOOLS
(For the students admitted in the academic year 2016-2017 and onwards)
Semester: VI

Objective: Emphasize usability and a "just works" philosophy in default configurations and feature designs.

Text Books:
1. James Lee and Brent Ware, "Open Source Web Development with LAMP using Linux, Apache, MySQL, Perl and PHP", Dorling Kindersley (India) Pvt. Ltd, 2008.
2. Eric Rosebrock and Eric Filson, "Setting up LAMP: Getting Linux, Apache, MySQL, and PHP Working Together", John Wiley and Sons, 2004.

Reference Books:
1. Cristian Darie, "AJAX and PHP", Packt Publishing, 2006.
2. Yann Le Scouarnec, Jeremy Stolz and Michael Glass, "Beginning PHP5, Apache, MySQL Web Development", Wiley India, New Delhi, 2005.
3. Christopher Diggins, "Linux Unwired", Shroff Publishers & Distributors Pvt. Ltd, 2004.
Unit I: Introduction to Open Source (14 hours)
Open Source Introduction: Open Source – Open Source vs. Commercial Software – What is Linux? – Free Software – Where Can I Use Linux? – Linux Kernel – Linux Distributions

Unit II: Linux Operating System (15 hours)
Linux Introduction: Linux Essential Commands – File System Concepts – Standard Files – Vi Editor – Partition Creation – Shell Introduction – String Processing – Installing Applications

Unit III: Open Source Web Servers (15 hours)
Open Source Web Servers: Installation, Configuration and Administration of Apache and Nginx. Open Source Tools, IDEs and RDBMS: Eclipse IDE, OpenStack cloud technology, Version Control Systems (Git, CVS). Open Source Repositories: GitHub, SourceForge, Google Code. Open Source RDBMS: MySQL basics, installation and usage, PostgreSQL, NoSQL, MongoDB, Hadoop

Unit IV: MySQL (15 hours)
Introduction to MySQL – The SHOW DATABASES and SHOW TABLES Commands – The USE Command – Creating Databases and Tables – DESCRIBE Table – SELECT, INSERT, UPDATE and DELETE Statements – Some Administrative Details – Table Joins – Loading and Dumping a Database

Unit V: Server-Side Scripting (13 hours)
Introduction: General Syntactic Characteristics – PHP Scripting – Commenting Your Code – Primitives, Operators and Expressions – PHP Variables – Control Statements – Arrays – Functions – Basic Form Processing – File and Folder Access – Cookies – Sessions – Database Access with PHP – MySQL – MySQL Functions – Inserting, Selecting, Deleting and Updating Records
UNIT-I
An Introduction to Open Source Software

Open source software is free software for your business or personal use. Open source developers freely share their knowledge and make the source code available to the public. The software is distributed with a license which allows other developers to modify it and/or add to it. Some examples of open source software are WordPress, Ubuntu, and Mozilla, creators of the Firefox browser.
Advantages
• Open source software allows you to make choices.
• Open source software is under constant development, which addresses vulnerabilities, bug fixes, enhancements, and more.
• You can modify the software as necessary for your own purposes.
• Some open source programs give the user the option of automatic updates, which keep the software current (e.g., WordPress).
• A number of open source programs have a core application which can be enhanced by the use of plug-ins and themes (e.g., WordPress and Joomla).
• Open source software offers a tremendous amount of flexibility.
• Open source software is potentially more secure than commercial programs because the code is constantly scrutinized by many programmers, not just a select few.
• Many open source programs can be installed on your own computer, unlike a proprietary hosted system which you can use but do not control; if the software owner doesn't like what you are doing, they can wipe out your hard work overnight.

Disadvantages
• If you don't know how to write code, you have to pay for modifications or learn how to code yourself.
• If the author of a product no longer supports the software, you might be out of luck unless development is picked up by other programmers; you can also take over development yourself if you wish.
• Open source is sometimes referred to as "open wallet", in the sense that it may cost you more to have open source code modified than it would cost you to buy a commercial program.
• Unless there is a structure in place to ensure the quality of the code, it might wind up with many changes, bug fixes, and patches, all of which can make the code more complex and/or degrade its quality, which in turn leads to more maintenance.
• The software might not be well documented, which can make it difficult to learn.
• Vulnerabilities in the software can be exploited by hackers, so make sure you have backups; this will save you a lot of time if you need to restore a site.
Difference Between Commercial and Open Source Software

1. Ownership and direction
Commercial: Commercial systems are created and supported by for-profit companies (e.g., Microsoft) that typically sell licenses for the use of their software and that are driven by maximizing profits.
Open source: Open source systems are overseen by dedicated communities of developers who contribute modifications to improve the product continually and who decide on the course of the software based on the needs of the community.

2. Cost
Commercial: Commercial software requires purchasing a license. The up-front license cost of a commercial CMS could run from a few thousand dollars to tens or even hundreds of thousands.
Open source: Open source software is generally free or has low-cost licensing options.

3. Support
Commercial: Commercial or proprietary software typically comes with vendor support and offers a robust suite of features right out of the box. If your organization's needs are very well planned and documented, your IT department favours Microsoft products and commercially supported software, and the up-front budget for software licensing is not a significant concern, then a Microsoft web stack and a commercial CMS may be a good option for you.
Open source: Open source solutions are supported by communities of volunteers. Your initial cost may be lower with this choice, but you will most likely need to budget for technical resources to maintain it over time. With a limited budget, however, your financial resources are better directed toward building the best possible website than toward acquiring licenses and paying mandatory fees for updates.

4. Features
Commercial: The commercial version often includes a lot of extra features, so it may do more (printing, text search, extraction) and do some things better.
Open source: The open source version provides a full package, but may come with a more limiting license.

5. Revenue sources
Commercial: product sales, product licenses, product renewals, SaaS (Software as a Service)/hosting, consulting sales, support contracts, and venture capital.
Open source: consulting sales, support contracts, SaaS/hosting, and donations.

6. Marketing channels
Commercial: sales teams, marketing teams, advertising, search engines, word of mouth (viral marketing), and case studies.
Open source: search engines, word of mouth (viral marketing), and case studies.
What is Linux?
Linux is the best-known and most-used open source operating system. As an
operating system, Linux is software that sits underneath all of the other software on a
computer, receiving requests from those programs and relaying these requests to the
computer’s hardware.

Linux is an open source version of Unix, developed by Linus Torvalds in 1991 as a port of Unix to the Intel x86 processor. This made Unix available on the most ubiquitous computer hardware that has ever existed, and therefore available to almost everyone.

Linux has since been ported to almost every processor and device one could imagine, including game consoles, personal digital assistants (PDAs), personal digital video recorders, and IBM mainframes, expanding the original concept of Unix for x86 to Unix for everything.
For the purposes of this page, we use the term “Linux” to refer to the Linux kernel, but
also the set of programs, tools, and services that are typically bundled together with the
Linux kernel to provide all of the necessary components of a fully functional operating
system.
Some people, particularly members of the Free Software Foundation, refer to this
collection as GNU/Linux, because many of the tools included are GNU components.
However, not all Linux installations use GNU components as a part of their operating
system. Android, for example, uses a Linux kernel but relies very little on GNU tools.

Linux has a graphical interface, and the types of software you are accustomed to using on other operating systems, such as word processing applications, have Linux equivalents.

Linux is different from other operating systems in many important ways. Linux is open source software: the code used to create Linux is free and available to the public to view, edit, and, for users with the appropriate skills, contribute to.
Linux is also different in that, although the core pieces of the Linux operating system are
generally common, there are many distributions of Linux, which include different
software options. This means that Linux is incredibly customizable, because not just
applications, such as word processors and web browsers, can be swapped out. Linux
users also can choose core components, such as which system displays graphics, and
other user-interface components.
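On any Linux machine, the kernel/distribution distinction described above can be seen directly from the shell. This is a minimal sketch assuming a typical modern Linux system; the exact output varies by machine:

```shell
# The kernel itself: one project, originally created by Linus Torvalds.
uname -sr    # prints the kernel name and release, e.g. "Linux 6.x..."

# The distribution: the kernel plus the bundled tools, packages and defaults.
# /etc/os-release is provided by most modern distributions.
cat /etc/os-release 2>/dev/null || echo "this system has no /etc/os-release"
```

The same kernel release can appear under many distribution names, which is exactly the point: distributions differ in the software bundled around the kernel, not in the kernel itself.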
Usage of Linux
Companies and individuals choose Linux for their servers because it is secure, and you
can receive excellent support from a large community of users, in addition to companies
like Canonical, SUSE, and Red Hat, which offer commercial support.
Many of the devices you probably own, such as Android phones, digital storage devices, personal video recorders, cameras, wearables, and more, also run Linux. Even your car may have Linux running under the hood.
Owners of Linux
The trademark on the name “Linux” rests with its creator, Linus Torvalds. The source code
for Linux is under copyright by its many individual authors, and licensed under the
GPLv2 license.
Contribution to Linux
Linux community is much more than the kernel, and needs contributions from lots of other
people besides programmers. Every distribution contains hundreds or thousands of
programs that can be distributed along with it, and each of these programs, as well as the
distribution itself, need a variety of people and skill sets to make them successful, including:
Testers to make sure everything works on different configurations of hardware and
software, and to report the bugs when it does not.
Designers to create user interfaces and graphics distributed with various programs.
Writers who can create documentation, how-tos, and other important text distributed
with software.
Translators to take programs and documentation from their native languages and
make them accessible to people around the world.
Packagers to take software programs and put all the parts together to make sure they
run flawlessly in different distributions. Evangelists to spread the word about Linux and open source in general. And of course developers to write the software itself.
Advantages and Benefits of Linux
One of the significant benefits of open source software such as Linux is that, because it has no single owner, it can be debugged without recourse to a license owner or software proprietor.
The major advantage of Linux is its cost: the core OS is free, while many software applications also come with the GNU General Public License. It can also be used simultaneously by large numbers of users without slowing down or freezing, and it is very fast.
It is an excellent networking platform and performs at optimum efficiency even
with little available hard disk space.
Linux also runs on a wide range of hardware types, including PCs, Macs,
mainframes, supercomputers, some cell phones and industrial robots. Some prefer to
dual-boot Linux and Windows, while others prefer Linux and Mac OS. System76 machines come pre-installed with Linux in the form of Ubuntu, a Debian-based distribution. This is one of the most popular Linux distributions for laptops.
Benefits and advantages of Linux over other
operating systems
• It is free to use and distribute.
• Support is free through online help sites, blogs and forums.
• It is very reliable, with very few crashes; more so than most other operating systems.
• A huge amount of free open source software has been developed for it.
• It is very resistant to malware such as spyware, adware and viruses.
• It runs on a wide variety of machines that cannot be updated to use newer Windows versions.
• Since the source code is visible, 'backdoors' are easily spotted, so Linux offers greater security for sensitive applications.
• Linux offers a high degree of configuration flexibility, and significant customization is possible without modifying the source code.
Free Software
Linux is built with a collaborative development model. The operating system and most
of its software are created by volunteers and employees of companies, governments and
organisations from all over the world. The operating system is free to use and everyone
has the freedom to contribute to its development. This co-operative development model
means that everyone can benefit.
Because of this, we like to call it Free Software, or Socially Responsible Software.
Closely related is the concept of Open Source Software. Together, Free and Open
Source Software is collectively abbreviated as FOSS.
Transparency of the code and development process means that it can be participated
in and audited at all levels.
Linux has many other benefits, including speed, security and stability. It is renowned for its ability to run well on more modest hardware. In addition, viruses, worms, spyware and adware are basically a non-issue on Linux.
Many FOSS developers develop for fun; many others are paid for their time. Because
the code is open, it is actively worked on by all sorts of individuals and organisations.
Since development is shared, it can cost relatively little to work with FOSS.
When access to the source code is available, there are essentially no limitations to what
can be achieved. Free Software is so named because of the freedom granted to the user.
FOSS allows people and organisations to do what they want with the computers that
they own, without being beholden to any company. They can make whatever
modifications that they wish, providing unparalleled flexibility.
Many groups in the government, business and education sectors use Linux as a means of
cutting costs. It also allows them to create products that they would not otherwise
be able to make.
Schools both nationally and internationally are seeing the benefits of FOSS. There is a
vast wealth of free software designed for children of all ages, including educational
programmes and games. Education is all about imparting knowledge in an open fashion.
Jimmy Wales, founder and leader of the Wikipedia project, explains that free knowledge cannot exist unless the tools used to manage it are also free.
There are over 30 million users of Linux, and that number is growing rapidly. The Mozilla Firefox Web browser is among the most popular Web browsers, and other Web browsers such as Chrome, Safari and Konqueror are also based on open source engines.
Linux and FOSS are major players in most other markets. The amazing flexibility
and scalability of the software means that Linux can be found in computers both large
and small.
1. Linux powers over 85 per cent of the top 500 supercomputers in the world,
while also scaling down to run on one quarter of new smartphones.
2. Over 95 per cent of the servers and desktops at large animation and
visual effects companies use Linux.
3. Linux drives over half of all Web servers, including 8 of the 10 most reliable
hosting providers. The Apache Web server, a flagship example of FOSS, propels
over 60 per cent of Web sites, including 44 per cent of secure (SSL) sites.
4. The One Laptop Per Child programme, a unique and ambitious collaboration between the United Nations and a multitude of governments, companies and other organisations worldwide, is built on Linux.

Linux distributions tend to have their own support resources as well:
For Ubuntu (and derivatives like Kubuntu, Xubuntu, Edubuntu, etc.): Canonical's free technical support page, the Ubuntu Community, community documentation, and the Unofficial Ubuntu Guide.
For Fedora: the Fedora documentation, Communicating and Getting Help, and the Unofficial Fedora Guide.
For other distributions: consult the Web site of the project, as well as its DistroWatch page.
Free Open Source Software (FOSS)

Free Open Source Software, also called just Open Source or Free Software, is licensed to be free to use, modify, and distribute. Most FOSS licenses also include a kind of legal Golden Rule, requiring any changes (such as fixes and enhancements) to be released under the same license. This creates the trust among developers and users that generates large, sustainable communities that grow the software over time.
The term free software refers primarily to a lack of restrictions on individual users (and often, though not necessarily, zero cost); the term open source software refers to collaborative or networked development.
FOSS, which embraces the benefits and adherents of both paradigms, is gaining widespread
acceptance as traditional modes of software design are challenged.
The increasing popularity of FOSS has led to frustration in some circles for at least
three reasons:
Conventional software developers, distributors and sellers fear that FOSS
will undercut their profits.
Abuse of FOSS privileges may lead to questionable claims of copyright or
trademark protection, thereby spawning litigation.
The monetary value of FOSS is unclear, so governments have trouble figuring
out how to tax it.
Why “Free Software” is better than “Open Source”
While free software by any other name would give you the same freedom, it makes a
big difference which name we use: different words convey different ideas.
In 1998, some of the people in the free software community began using the term “open
source software” instead of “free software” to describe what they do. The term “open source”
quickly became associated with a different approach, a different philosophy, different values,
and even a different criterion for which licenses are acceptable. The Free Software
movement and the Open Source movement are today separate movements with different
views and goals, although we can and do work together on some practical projects.

The fundamental difference between the two movements is in their values, their ways of
looking at the world. For the Open Source movement, the issue of whether software should
be open source is a practical question, not an ethical one. As one person put it, “Open
source is a development methodology; free software is a social movement.” For the Open
Source movement, non-free software is a suboptimal solution. For the Free Software
movement, non-free software is a social problem and free software is the solution.
Relationship between the Free Software movement and the Open Source movement

The Free Software movement and the Open Source movement are like two political camps within the free software community.

Radical groups in the 1960s developed a reputation for factionalism: organizations split because
of disagreements on details of strategy, and then treated each other as enemies. Or at
least, such is the image people have of them, whether or not it was true.
The relationship between the Free Software movement and the Open Source movement is
just the opposite of that picture. We disagree on the basic principles, but agree more or less
on the practical recommendations. So we can and do work together on many specific
projects. We don't think of the Open Source movement as an enemy. The enemy is
proprietary software.
We are not against the Open Source movement, but we don't want to be lumped in with
them. We acknowledge that they have contributed to our community, but we created this
community, and we want people to know this. We want people to associate our achievements
with our values and our philosophy, not with theirs. We want to be heard, not obscured
behind a group with different views. To prevent people from thinking we are part of them,
we take pains to avoid using the word “open” to describe free software, or its contrary,
“closed”, in talking about non-free software.
So please mention the Free Software movement when you talk about the work we have
done, and the software we have developed—such as the GNU/Linux operating system.
Comparing the two terms
The rest of this article compares the two terms “free software” and “open source”. It
shows why the term “open source” does not solve any problems, and in fact creates some.
Ambiguity
The term “free software” has an ambiguity problem: an unintended meaning, “Software you can
get for zero price,” fits the term just as well as the intended meaning, “software which gives the
user certain freedoms.” We address this problem by publishing a more precise definition of free
software, but this is not a perfect solution; it cannot completely eliminate the problem. An
unambiguously correct term would be better, if it didn't have other problems.
Unfortunately, all the alternatives in English have problems of their own. We've looked at
many alternatives that people have suggested, but none is so clearly “right” that switching to
it would be a good idea. Every proposed replacement for “free software” has a similar kind
of semantic problem, or worse—and this includes “open source software.”
The official definition of “open source software,” as published by the Open Source Initiative,
is very close to our definition of free software; however, it is a little looser in some respects,
and they have accepted a few licenses that we consider unacceptably restrictive of the users.
However, the obvious meaning for the expression “open source software” is “You can look
at the source code.” This is a much weaker criterion than free software; it includes free
software, but also some proprietary programs, including Xv, and Qt under its original license
(before the QPL).
That obvious meaning for “open source” is not the meaning that its advocates intend. The
result is that most people misunderstand what those advocates are advocating. Here is
how writer Neal Stephenson defined “open source”:
Linux is “open source” software meaning, simply, that anyone can get copies of its source code files.
I don't think he deliberately sought to reject or dispute the “official” definition. I think he
simply applied the conventions of the English language to come up with a meaning for
the term. The state of Kansas published a similar definition:
Make use of open-source software (OSS). OSS is software for which the source
code is freely and publicly available, though the specific licensing agreements vary as to what one is allowed to do with that code.
Of course, the open source people have tried to deal with this by publishing a
precise definition for the term, just as we have done for “free software.”
But the explanation for “free software” is simple—a person who has grasped the idea of
“free speech, not free beer” will not get it wrong again. There is no such succinct way to
explain the official meaning of “open source” and show clearly why the natural definition is
the wrong one.
Fear of Freedom
The main argument for the term “open source software” is that “free software” makes some
people uneasy. That's true: talking about freedom, about ethical issues, about
responsibilities as well as convenience, is asking people to think about things they might
rather ignore. This can trigger discomfort, and some people may reject the idea for that. It
does not follow that society would be better off if we stop talking about these things.
Years ago, free software developers noticed this discomfort reaction, and some started
exploring an approach for avoiding it. They figured that by keeping quiet about ethics
and freedom, and talking only about the immediate practical benefits of certain free
software, they might be able to “sell” the software more effectively to certain users,
especially business. The term “open source” is offered as a way of doing more of this—a
way to be “more acceptable to business.” The views and values of the Open Source
movement stem from this decision.
This approach has proved effective, in its own terms. Today many people are switching to
free software for purely practical reasons. That is good, as far as it goes, but that isn't all
we need to do! Attracting users to free software is not the whole job, just the first step.
Sooner or later these users will be invited to switch back to proprietary software for some
practical advantage. Countless companies seek to offer such temptation, and why would
users decline? Only if they have learned to value the freedom free software gives them, for its
own sake. It is up to us to spread this idea—and in order to do that, we have to talk about
freedom. A certain amount of the “keep quiet” approach to business can be useful for the
community, but we must have plenty of freedom talk too.
At present, we have plenty of “keep quiet”, but not enough freedom talk. Most people involved
with free software say little about freedom—usually because they seek to be “more
acceptable to business.” Software distributors especially show this pattern. Some GNU/Linux
operating system distributions add proprietary packages to the basic free system, and they invite
users to consider this an advantage, rather than a step backwards from freedom.
We are failing to keep up with the influx of free software users, failing to teach people about
freedom and our community as fast as they enter it. This is why non-free software (which Qt
was when it first became popular), and partially non-free operating system distributions, find
such fertile ground. To stop using the word “free” now would be a mistake; we need more,
not less, talk about freedom.
If those using the term “open source” draw more users into our community, that is a
contribution, but the rest of us will have to work even harder to bring the issue of freedom
to those users' attention. We have to say, “It's free software and it gives you freedom!”—
more and louder than ever before.
Would a Trademark Help?
The advocates of “open source software” tried to make it a trademark, saying this would
enable them to prevent misuse. This initiative was later dropped, the term being too
descriptive to qualify as a trademark; thus, the legal status of “open source” is the same as
that of “free software”: there is no legal constraint on using it. I have heard reports of a
number of companies' calling software packages “open source” even though they did not
fit the official definition; I have observed some instances myself.
But would it have made a big difference to use a term that is a trademark? Not necessarily.
Companies also made announcements that give the impression that a program is “open source
software” without explicitly saying so. For example, one IBM announcement, about a
program that did not fit the official definition, said this:
As is common in the open source community, users of the ... technology will also be able to collaborate with IBM ...
This did not actually say that the program was “open source”, but many readers did not
notice that detail. (I should note that IBM was sincerely trying to make this program free
software, and later adopted a new license which does make it free software and “open
source”; but when that announcement was made, the program did not qualify as either one.)
And here is how Cygnus Solutions, which was formed to be a free software company
and subsequently branched out (so to speak) into proprietary software, advertised some
proprietary software products:
Cygnus Solutions is a leader in the open source market and has just launched two products into the [GNU/]Linux marketplace.
Unlike IBM, Cygnus was not trying to make these packages free software, and the packages did
not come close to qualifying. But Cygnus didn't actually say that these are “open source
software”, they just made use of the term to give careless readers that impression.
These observations suggest that a trademark would not have truly prevented the
confusion that comes with the term “open source”.
Misunderstandings(?) of “Open Source”
The Open Source Definition is clear enough, and it is quite clear that the typical non-free
program does not qualify. So you would think that “Open Source company” would mean
one whose products are free software (or close to it), right? Alas, many companies are trying
to give it a different meaning.
At the “Open Source Developers Day” meeting in August 1998, several of the commercial
developers invited said they intend to make only a part of their work free software (or “open
source”). The focus of their business is on developing proprietary add-ons (software or
manuals) to sell to the users of this free software. They ask us to regard this as legitimate, as
part of our community, because some of the money is donated to free software development.
In effect, these companies seek to gain the favorable cachet of “open source” for their
proprietary software products—even though those are not “open source software”—because
they have some relationship to free software or because the same company also maintains
some free software. (One company founder said quite explicitly that they would put, into
the free package they support, as little of their work as the community would stand for.)
Over the years, many companies have contributed to free software development. Some of
these companies primarily developed non-free software, but the two activities were
separate; thus, we could ignore their non-free products, and work with them on free
software projects. Then we could honestly thank them afterward for their free software
contributions, without talking about the rest of what they did.
We cannot do the same with these new companies, because they won't let us. These
companies actively invite the public to lump all their activities together; they want us to
regard their non-free software as favorably as we would regard a real contribution, although
it is not one. They present themselves as “open source companies,” hoping that we will get a
warm fuzzy feeling about them, and that we will be fuzzy-minded in applying it.
This manipulative practice would be no less harmful if it were done using the term “free
software.” But companies do not seem to use the term “free software” that way; perhaps its
association with idealism makes it seem unsuitable. The term “open source” opened the
door for this.
At a trade show in late 1998, dedicated to the operating system often referred to as “Linux”, the
featured speaker was an executive from a prominent software company. He was probably
invited on account of his company's decision to “support” that system. Unfortunately, their form
of “support” consists of releasing non-free software that works with the system—in other words,
using our community as a market but not contributing to it.
He said, “There is no way we will make our product open source, but perhaps we will make
it ‘internal’ open source. If we allow our customer support staff to have access to the source
code, they could fix bugs for the customers, and we could provide a better product and better
service.” (This is not an exact quote, as I did not write his words down, but it gets the gist.)
People in the audience afterward told me, “He just doesn't get the point.” But is that so? Which point did he not get?
He did not miss the point of the Open Source movement. That movement does not say
users should have freedom, only that allowing more people to look at the source code and
help improve it makes for faster and better development. The executive grasped that point
completely; unwilling to carry out that approach in full, users included, he was considering
implementing it partially, within the company.
The point that he missed is the point that “open source” was designed not to raise: the
point that users deserve freedom.
Spreading the idea of freedom is a big job—it needs your help. That's why we stick to the
term “free software” in the GNU Project, so we can help do that job. If you feel that freedom
and community are important for their own sake—not just for the convenience they bring—
please join us in using the term “free software”.
The free software movement was started by Richard M. Stallman with the GNU Project in 1984; the Free Software Foundation was founded soon afterward, in 1985.
Free software is defined by the offering of 4 basic freedoms:
• The freedom to run the program, for any purpose (freedom 0).
• The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.
• The freedom to redistribute copies so you can help your neighbor (freedom 2).
• The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.
Non-free software is also called proprietary software. Free software should not be
confused with freeware; freeware is free as in free beer, not as in freedom.
Benefits of Free and Open Source Software
These freedoms benefit users in many ways. Without access to the code and the right to
modify it and distribute it, a distribution like openSUSE would not be possible at all.
Fix the software
These freedoms mean that you can fix bugs, which exist in all software, or you can
change the software to do what you need it to do, or even fix security issues. In the case of
proprietary software you can ask the provider to add functionality and fix bugs, and
maybe they'll do it when it suits them, maybe not.
Share
Free software allows you to share software and thus help your friends and neighbours
without you having to breach licenses.
Know and control what is going on
With proprietary software you can't know what a given program _really_ does. Some very
well known proprietary software has been caught spying on users and sending information
about their behaviour and such. Proprietary software also has a tendency to include various
digital restrictions on what the user can do, when, for how long, etc. With free software
you have access to the source code and can study what the program does and change it if
you don't like it.
Technical benefits
Because open source code can be seen and fixed by more people, it can be developed faster and become better. This system of "peer review" can be compared to the way scientific research works. Proprietary code, in comparison, is kept secret and rarely seen by anybody outside the company behind it.
Economic benefits
It's also a way in which companies can share development costs. For example Novell and
Red Hat are competitors yet they develop many of the same programs and thus help each
other. IBM and HP could also be seen as competitors yet they both contribute to the
Linux kernel, etc., thus sharing development costs.
Free software makes a competitive market for support possible, potentially heightening the
quality of support. With proprietary software only the provider who has access to the
source code can realistically offer decent support, and thus has a kind of monopoly.
5. Where Can I Use Linux?
You can use Linux as a server OS or as a standalone OS on your PC, though it is best suited to the server role. As a server OS it provides different services and network resources to clients. A server OS must be:
Stable, Robust, Secure, High Performance
Linux offers all of the above characteristics, plus it is an open source and free OS. So Linux can be used:
(1) On a standalone workstation/PC for word processing, graphics, software development, internet, e-mail, chat, a small personal database management system, etc.
(2) In a network environment as:
(A) A file, print, or application server: sharing data, connecting an expensive device such as a printer and sharing it, and handling e-mail within the LAN/intranet are some of the applications. A Linux server can serve clients running different operating systems.
(B) A Linux server can be connected to the Internet, so that PCs on the intranet can share internet access, e-mail, etc. You can run a web server that hosts your web site or publishes information on the internet.
A Linux server can act as a proxy, mail, WWW, or router server, among others.
So you can use Linux for:
Personal work, a web server, software development, a workstation or workgroup server, and in a data center for various server activities such as FTP, Telnet, SSH, Web, Mail, Proxy, and Proxy Cache appliances.
6. What Is the Kernel?
The kernel is the heart of the Linux OS.
It manages the resources of Linux. Resources means the facilities available in Linux, e.g. the facility to store data, print data on a printer, memory, file management, etc.
The kernel decides who will use a resource, for how long, and when. It runs your programs (or sets them up to execute binary files).
The kernel acts as an intermediary between the computer hardware and the various programs, applications, and shells.
It is the memory-resident portion of Linux. It performs the following tasks:
I/O management, process management, device management, file management, and memory management.
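The running kernel can be inspected from the shell. A minimal sketch (assuming a Linux system, where the /proc pseudo-filesystem is available):

```shell
# Print the release string of the kernel currently managing the system
uname -r

# The kernel exposes information about itself through the /proc pseudo-filesystem
cat /proc/version
```

`uname -r` prints something like a version number (e.g. 5.x.y); the exact string depends on the distribution.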
Unit 2
1. Linux Essential Commands
a. File Administration
1. ls [option(s)] [file(s)]
If you run ls without any additional parameters, the program will list the contents of the current directory in short form.
-l detailed list
-a displays hidden files
2. cp [option(s)] sourcefile targetfile
Copies sourcefile to targetfile.
-i Waits for confirmation, if necessary, before an existing targetfile is overwritten
-r Copies recursively (includes subdirectories)
3. mv [option(s)] sourcefile targetfile
Copies sourcefile to targetfile, then deletes the original sourcefile.
-b Creates a backup copy of the sourcefile before moving
-i Waits for confirmation, if necessary, before an existing targetfile is overwritten
4. rm [option(s)] file(s)
Removes the specified files from the file system. Directories are not removed by rm unless the option -r is used.
-r Deletes any existing subdirectories
-i Waits for confirmation before deleting each file
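The basic file-administration commands above can be tried safely in a throwaway directory. A minimal session sketch (the file names are illustrative):

```shell
# A temporary directory keeps the demonstration away from real files
dir=$(mktemp -d)
cd "$dir"

echo "hello" > notes.txt
cp notes.txt backup.txt     # copy notes.txt to backup.txt
mv backup.txt archive.txt   # rename (move) the copy
ls -l                       # detailed listing: notes.txt and archive.txt
rm archive.txt              # remove the renamed copy
ls                          # only notes.txt remains
```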
5. ln [option(s)] sourcefile targetfile
Creates an internal link from the sourcefile to the targetfile, under a different name.
-s Creates a symbolic link
6. cd [option(s)] [directory]
Changes the current directory. cd without any parameters changes to the user's home directory.
7. mkdir [option(s)] directoryname
Creates a new directory.
8. rmdir [option(s)] directoryname
Deletes the specified directory, provided it is already empty.
9. chown [option(s)] username.group file(s)
Transfers the ownership of a file to the user with the specified user name.
-R Changes files and directories in all subdirectories
10. chgrp [option(s)] groupname file(s)
Transfers the group ownership of a given file to the group with the specified group name. The file owner can only change group ownership if a member of both the existing and the new group.
11. chmod [options] mode file(s)
Changes the access permissions. The mode parameter has three parts: group, access, and access type. group accepts the following characters:
u user
g group
o others
For access, access is granted by the + symbol and denied by the - symbol.
The access type is controlled by the following options:
r read
w write
x eXecute (executing files or changing to the directory)
s Set uid bit (the application or program is started as if it were started by the owner of the file)
12. gzip [parameters] file(s)
This program compresses the contents of files, using complex mathematical algorithms. Files compressed in this way are given the extension .gz and need to be uncompressed before they can be used. To compress several files or even entire directories, use the tar command.
-d Decompresses the packed gzip files so they return to their original size and can be processed normally (like the command gunzip)
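The chmod and gzip commands can be sketched together in a temporary directory (the file name is illustrative; `stat -c` is a GNU coreutils flag assumed to be available):

```shell
dir=$(mktemp -d)
cd "$dir"
echo "quarterly figures" > report.txt

chmod u+x report.txt        # symbolic form: add execute for the user (owner)
chmod o-r report.txt        # symbolic form: remove read for others
chmod 640 report.txt        # numeric form: rw- for owner, r-- for group, --- for others
stat -c %a report.txt       # prints the octal mode: 640

gzip report.txt             # replaces report.txt with compressed report.txt.gz
gzip -d report.txt.gz       # -d decompresses it again (same as gunzip)
```

Note that gzip removes the original file when compressing and restores it (permissions included) when decompressing.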
13. tar options archive file(s)
tar puts one file or (usually) several files into an archive. Compression is optional. tar is a quite complex command with a number of options available. The most frequently used options are:
-f Writes the output to a file and not to the screen as is usually the case
-c Creates a new tar archive
-r Adds files to an existing archive
-t Outputs the contents of an archive
-u Adds files, but only if they are newer than the files already contained in the archive
-x Unpacks files from an archive (extraction)
-z Packs the resulting archive with gzip
-j Compresses the resulting archive with bzip2
-v Lists files processed
The archive files created by tar end with .tar. If the tar archive was also compressed using gzip, the ending is .tgz or .tar.gz; if it was compressed using bzip2, .tar.bz2.
14. locate pattern(s)
The locate command can find in which directory a specified file is located. If desired, use wild cards to specify file names. The program is very speedy, as it uses a database specifically created for the purpose (rather than searching through the entire file system). The database can be generated by root with updatedb.
15. find [option(s)]
The find command allows you to search for a file in a given directory. The first argument specifies the directory in which to start the search. The option -name must be followed by a search string, which may also include wild cards. Unlike locate, which uses a database, find scans the actual directory.
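A short tar session shows the -c, -t and -x options together (directory and file names are illustrative; -C, which selects the extraction directory, is supported by both GNU and BSD tar):

```shell
dir=$(mktemp -d)
cd "$dir"
mkdir project
echo "alpha" > project/a.txt
echo "beta"  > project/b.txt

tar -czf project.tar.gz project      # -c create, -z gzip-compress, -f write to this file
tar -tzf project.tar.gz              # -t list the archive's contents without extracting

mkdir restore
tar -xzf project.tar.gz -C restore   # -x extract; -C chooses the target directory
```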
b. Commands to Access File Contents
1. cat [option(s)] file(s)
The cat command displays the contents of a file, printing the entire contents to the screen without interruption.
-n Numbers the output on the left margin
2. less [option(s)] file(s)
This command can be used to browse the contents of the specified file. Scroll half a screen page up or down with PgUp and PgDn, or a full screen page down with Space. Jump to the beginning or end of a file using Home and End. Press Q to exit the program.
3. grep [option(s)] searchstring filenames
The grep command finds a specific searchstring in the specified file(s). If the search string is found, the command displays the line in which the searchstring was found, along with the file name.
-i Ignores case
-l Only displays the names of the respective files, but not the text lines
-n Additionally displays the numbers of the lines in which it found a hit
-L Only lists the files in which searchstring does not occur
4. diff [option(s)] file1 file2
The diff command compares the contents of any two files. The output produced by the program lists all lines that do not match. This is frequently used by programmers who need only send their program alterations and not the entire source code.
-q Only reports whether the two given files differ
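A small session exercising cat, grep, and diff (file names and contents are illustrative; note that diff exits with a non-zero status when the files differ, which is why `|| true` is appended here):

```shell
dir=$(mktemp -d)
cd "$dir"
printf 'alpha\nbeta\ngamma\n' > old.txt
printf 'alpha\nBETA\ngamma\n' > new.txt

cat -n old.txt                    # print the file with line numbers
grep -n beta old.txt              # prints "2:beta": line number plus matching line
grep -in beta new.txt             # -i ignores case, so BETA on line 2 matches too
diff old.txt new.txt || true      # lists the differing lines
diff -q old.txt new.txt || true   # only reports *that* the files differ
```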
c. File Systems
1. mount [option(s)] [<device>] mountpoint
This command can be used to mount any data media, such as hard disks, CD-
ROM drives, and other drives, to a directory of the Linux file system.
-r mount read-only
-t filesystem
Specifies the file system. The most common are ext2 for Linux hard disks, msdos
for MS-DOS media, vfat for the Windows file system, and iso9660 for CDs.
2. umount [option(s)] mountpoint
This command unmounts a mounted drive from the file system. To prevent data loss,
run this command before taking a removable data medium from its drive. Normally,
only root is allowed to run the commands mount and umount. To enable other users
to run these commands, edit the /etc/fstab file to specify the option user for the
respective drive.
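Actually mounting a device requires root, but listing what is currently mounted does not. A read-only sketch (assuming a Linux system, where the kernel's mount table is exposed at /proc/mounts):

```shell
# mount with no arguments lists the currently mounted filesystems
mount | head -n 5

# the kernel's own view of the mount table
grep ' / ' /proc/mounts     # the entry for the root filesystem
```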
d. System Commands
System Information
df [option(s)] [directory]
The df (disk free) command, when used without any options, displays information about
the total disk space, the disk space currently in use, and the free space on all the
mounted drives. If a directory is specified, the information is limited to the drive on which that directory is located.
-H : shows the number of occupied blocks in gigabytes, megabytes, or kilobytes — in human-readable format
-t : Type of file system (ext2, nfs, etc.)
du [option(s)] [path]
This command, when executed without any parameters, shows the total disk space occupied by files and subdirectories in the current directory.
-a : Displays the size of each individual file
-h : Output in human-readable form
-s : Displays only the calculated total size
free [option(s)]
The command free displays information about RAM and swap space usage, showing the total and the used amount in both categories.
-b : Output in bytes
-k : Output in kilobytes
-m : Output in megabytes
date [option(s)]
This simple program displays the current system time. If run as root, it can also be used to change the system time. Details about the program are available in date.
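The disk- and time-related commands above can be sketched against a directory we create ourselves (the 100 KB file is illustrative; du/df flags shown are common GNU coreutils options):

```shell
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/blob" bs=1024 count=100 2>/dev/null   # create a 100 KB file

du -sh "$dir"    # -s total only, -h human-readable: space used by the directory
df -h "$dir"     # free space on the filesystem holding that directory
date             # the current system time
```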
Processes
top [options(s)]
top provides a quick overview of the currently running processes. Press H to access a page that briefly explains the main options to customize the program.
ps [option(s)] [process ID]
If run without any options, this command displays a table of all your own programs or processes: those you started. The options for this command are not preceded by a hyphen.
aux
Displays a detailed list of all processes, independent of the owner.
kill [option(s)] process ID
Unfortunately, sometimes a program cannot be terminated in the normal way.
However, in most cases, you should still be able to stop such a runaway program by
executing the kill command, specifying the respective process ID (see top and ps).
kill sends a TERM signal that instructs the program to shut itself down. If this does
not help, the following parameter can be used:
-9
Sends a KILL signal instead of a TERM signal, with which the process really is
annihilated by the operating system. This brings the specific processes to an end
in almost all cases.
killall [option(s)] processname
This command is similar to kill, but uses the process name (instead of the process ID) as an argument, causing all processes with that name to be killed.
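The ps/kill workflow can be demonstrated harmlessly against a background sleep we start ourselves (the `wait` at the end simply collects the terminated process's status):

```shell
sleep 300 &            # start a long-running background process
pid=$!                 # $! holds the PID of the most recent background job
ps -p "$pid"           # confirm it is running
kill "$pid"            # send the default TERM signal, asking it to shut down
wait "$pid" 2>/dev/null || true   # reap it; the status reflects the signal
```

Only if the TERM signal were ignored would `kill -9 "$pid"` (the KILL signal) be needed.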
Network
ping [option(s)] host name|IP address
The ping command is the standard tool for testing the basic functionality of TCP/IP
networks. It sends a small data packet to the destination host, requesting an
immediate reply. If this works, ping displays a message to that effect, which indicates
that the network link is basically functioning.
-c number : Determines the total number of packets to send and ends after they have been dispatched. By default, there is no limit set.
-f : Flood ping: sends as many packets as possible. A popular means, reserved to root, of testing networks.
-i value : Specifies the interval between two packets in seconds. Default: one second.
nslookup
The Domain Name System resolves domain names to IP addresses. With this tool, you can send queries to name servers (DNS servers).
telnet [option(s)] host name or IP address
Telnet is actually an Internet protocol that enables you to work on remote hosts across a network. telnet is also the name of a Linux program that uses this protocol to enable operations on remote computers. Warning: do not use telnet over a network on which third parties can eavesdrop, because it transmits all data, including passwords, unencrypted.
Miscellaneous
passwd [option(s)] [username]
Users may change their own passwords at any time using this command.
Furthermore, the administrator root can use the command to change the password of
any user on the system.
su [option(s)] [username]
The su command makes it possible to log in under a different user name from a
running session. When using the command without specifying a user name, you will
be prompted for the root password. Specify a user name and the corresponding
password to use the environment of the respective user. The password is not
required from root, as root is authorized to assume the identity of any user.
halt [option(s)]
To avoid loss of data, you should always use this program to shut down your system.
reboot [option(s)]
Does the same as halt with the difference that the system performs an immediate
reboot.
clear
This command cleans up the visible area of the console. It has no options.
2. File system in Linux
In a computer the hard disk forms a
physical medium which can store files,
and thus forms a file system. The major file system types in Linux are
1) EXT3
2) EXT4
3) VFAT
4) Swap
Ext3 and Ext4 stand for extended file system (third and fourth versions). VFAT is a file system equivalent to the Windows FAT (File Allocation Table) file system; it stands for Virtual FAT. Ext3 and Ext4 are used to create and access logical volumes, while VFAT is used on external media such as pen drives.
Swap is used to create a swap area on the hard disk, which can be used as virtual memory. The total memory which a running application can see is the sum of physical memory (RAM) and virtual memory (swap).
In Linux the file system is maintained in a hierarchical manner. The "/" directory forms the root of the whole file system, under which all other directories are mounted.
Mount Points and Their Usage
/bin: Binary directory; stores commands used in Linux
/boot: Stores files, such as the boot loader, required during boot time
/dev: Device information directory; device files are kept here
/etc: System configuration files are stored here
/home: Document directory of all normal users
/root: Document directory of the super-user
/mnt: Mount directory for manual mounting
/media: Auto-mount directory
/lib: Shared libraries and kernel modules are stored here
/lost+found: Back-up point for the ext3 file system
/proc: Process information directory; it provides an interface to kernel data structures
/tmp: Directory provided for storing temporary files
/sbin: Directory for storing only default system commands
/var: Varying-file directory for storing regularly updated files
/opt: Optional directory for installing additional software
Linux has three types of users:
1) Super-user: the super-user in Linux is called "root". The root user has complete privileges in Linux; only root has administrative power.
2) Normal user: a normal user does not have administrative power; normal users have only limited access. It is the root user who creates normal users. In certain Linux distributions, such as Ubuntu and Linux Mint, there is no separate root login; in such cases commands (such as sudo) can be used to give a normal user administrative privileges.
3) System user: system users are users created by applications on the system. For example, on servers an application allows only its authorised users to access its service.
In Linux, everything is configured as a file. This includes not only text files, images and compiled programs (also referred to as executables), but also directories, partitions and hardware device drivers.
Each filesystem (used in the first sense) contains a control block, which holds
information about that filesystem. The other blocks in the filesystem are inodes, which contain information about individual files, and data blocks, which contain the information stored in the individual files.
To the Linux kernel, however, the filesystem is flat. That is, it does not (1) have a hierarchical structure, (2) differentiate between directories, files or programs, or (3) identify files by names. Instead, the kernel uses inodes to represent each file.
An inode is actually an entry in a list of inodes referred to as the inode list. Each inode contains information about a file, including:
(1) its inode number (a unique identification number),
(2) the owner and group associated with the file,
(3) the file type (for example, whether it is a regular file or a directory),
(4) the file's permission list,
(5) the file creation, access and modification times,
(6) the size of the file, and
(7) the disk address (i.e., the location on the disk where the file is physically stored).
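Inode numbers and the metadata stored in an inode can be inspected directly. A sketch using a hard link, which gives the same inode a second name (`stat -c` is a GNU coreutils flag assumed here):

```shell
dir=$(mktemp -d)
cd "$dir"
echo "content" > file1
ln file1 file2               # a hard link: a second name for the same inode

ls -i file1 file2            # -i prints each file's inode number before its name
stat -c 'inode=%i links=%h mode=%a' file1   # inode number, link count, permissions
```

Because both names refer to one inode, the two `ls -i` numbers match and the link count is 2.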
3.Vi Editor
Vi is a command line text editor. As you would be quite aware now, the command line is
quite a different environment to your GUI. It's a single window with text input and output
only. Vi has been designed to work within these limitations and many would argue, is
actually quite powerful as a result. Vi is intended as a plain text editor (similar to Notepad on
Windows, or Textedit on Mac) as opposed to a word processing suite such as Word or Pages.
It does, however have a lot more power compared to Notepad or Textedit.
There are two modes in Vi: Insert (or Input) mode and Edit mode. In Insert mode you may enter content into the file. In Edit mode you can move around the file and perform actions such as deleting, copying, searching and replacing, saving, etc.
When we run vi we normally issue it with a single command line argument which is the file
you would like to edit.
vi <file>
When you run this command it opens up the file.
You always start off in edit mode so the first thing we are going to do is switch to insert mode by pressing i. You can tell when you are in insert mode as the bottom left corner will tell you.
ZZ (note: capitals) - save and exit
:q! - discard all changes since the last save, and exit
:w - save the file but don't exit
:wq - again, save and exit
Below are some of the many commands you may enter to move around the file. Have a
play with them and see how they work.
Arrow keys - move the cursor around
j, k, h, l - move the cursor down, up, left and right (similar to the arrow keys)
^ (caret) - move cursor to the beginning of the current line
$ - move cursor to the end of the current line
nG - move to the nth line (e.g. 5G moves to the 5th line)
G - move to the last line
w - move to the beginning of the next word
nw - move forward n words (e.g. 2w moves two words forward)
b - move to the beginning of the previous word
nb - move back n words
{ - move backward one paragraph
} - move forward one paragraph
Below are some of the many ways in which we may delete content within vi. Have a
play with them now. (also check out the section below on undoing so that you can undo
your deletes.)
x - delete a single character
nx - delete n characters (e.g. 5x deletes five characters)
dd - delete the current line
d followed by a movement command - delete to where the movement command would have taken you (e.g. d5w deletes 5 words)
Undoing changes in vi is fairly easy. It is the character u.
u - undo the last action (you may keep pressing u to keep undoing)
U (note: capital) - undo all changes to the current line
4.Linux Security Models
SELinux is a system with MAC, or Mandatory Access Control. It implements a security policy called RBAC, Role-Based Access Control. This policy is implemented through DTAC, Dynamically Typed Access Control (also described as domain and type access control).
SELinux is sponsored by the U.S. National Security Agency. The funding goes to the teams working on SELinux, including Secure Computing Corp.; it should be noted that this company also holds patents related to the software.
In 1992, a new idea for security resulted in a project called Distributed Trusted Mach. The project developed some innovative solutions, which became part of an operating system called Fluke. Fluke evolved into Flux, which led to the development of the Flask architecture. The Flask architecture was then integrated with the Linux kernel, and the whole newly created project was called SELinux.
Linux was chosen because of two main features:
• growing popularity
• open development environment
Three elements of this system are very important:
1. The kernel: SELinux uses the Linux kernel infrastructure called LSM, Linux Security Modules. This infrastructure provides the interfaces that allow you to fully control access to all system objects when they are initiated by user actions.
2. Policies: SELinux allows every program only a limited amount of freedom.
3. Policies define access rights, the right to pursue activities in the system, and the behavior of the SELinux system.
SELinux is an RBAC system with built-in rules that are implemented by DTAC. The SELinux system also extends what is called the SUID philosophy; thus, it can also extend the understanding of the role of users in the system.
SELinux is an implementation of a mandatory access control (MAC) security model in the Linux operating system (OS). This mechanism resides inside the Linux kernel and checks for allowed operations after the standard Linux discretionary access controls (DAC) are checked.
Information security is made up of the following main attributes:
•Availability - Prevention of loss of access to resources and data
•Integrity- Prevention of unauthorized modification of data
•Confidentiality- Prevention of unauthorized disclosure of data
The security kernel is made up of mechanisms that fall within the Trusted Computing Base (TCB) and implements and enforces the reference monitor concept. The security kernel consists of the hardware, firmware, and software components that mediate all access and functions between subjects and objects. It is the core of the TCB and is the most commonly used approach to building trusted computing systems. There are four main requirements of the security kernel:
• It must provide isolation for the processes carrying out the reference monitor concept.
• The reference monitor must be invoked for every access attempt and must be impossible to circumvent. Thus, the reference monitor must be implemented in a complete and foolproof way.
• The reference monitor must be verifiable as being correct. This means that all decisions made by the reference monitor should be written to an audit log and verified as being correct.
• It must be small enough to be able to be tested and verified in a complete and comprehensive manner.
A security policy is a set of rules and practices dictating how sensitive information is managed, protected, and distributed. A security policy expresses exactly what the security level should be by setting the goals of what the security mechanisms are to accomplish.
Security policies that prevent information from flowing from a high security level to a lower security level are called multilevel security policies. These types of policies permit a subject to access an object only if the subject's security level is higher than or equal to the object's classification.
A security model maps the abstract goals of the policy to information system terms by specifying explicit data structures and techniques necessary to enforce the security policy. A security model is usually represented in mathematics and analytical ideas, which are then mapped to system specifications and then developed by programmers through programming code.
The security model takes this requirement and provides the necessary mathematical formulas, relationships, and structure to be followed to accomplish this goal. Some security models enforce rules to protect confidentiality, such as the Bell-LaPadula model. Other models enforce rules to protect integrity, such as the Biba model. Formal security models, such as Bell-LaPadula and Biba, are used to provide high assurance in security. Informal models, such as Clark-Wilson, are used more as a framework to describe how security policies should be expressed and executed.
Understanding DAC and MAC Linux security models
As SELinux is based on the concept of MAC, it is very important to understand
shortfalls of DAC (the default Linux security model) and advantages of MAC over DAC.
Under MAC, administrators control all interactions of software on the system. The concept of
least privilege is used, and by default applications and users have no rights, as all rights must
be granted by an administrator as part of the system’s security policy. Under DAC, files are
owned by a user and that user has full control over them. An attacker who penetrates an
account can do anything with the files owned by that user. For example, a hacker gaining access to an FTP server will have full control over all files owned by the FTP server account.
Worse, if an application runs under the context of the root user (which is common for
services like Web and FTP), an attacker will have full control over the entire system.
MAC provides each application with a virtual sandbox that only allows the application to
perform the tasks it is designed for and are explicitly allowed in the security policy. For
example, the Web server may only be able to read Web published files and serve them on a
specified network port. An attacker penetrating it will not be able to perform any activities not
expressly permitted by the security policy, even if the process is running as the root user.
Standard Unix permissions will still be present on the system, and will be consulted before the
SELinux policy when access attempts are made. If the standard permissions would deny access,
access is simply denied and SELinux is not involved. However if the standard file permissions
would allow access, the SELinux policy is consulted and access is either allowed or denied based
on the security contexts of the source process and the targeted object.
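The DAC half of that two-stage check is just the ordinary permission bits, which can be illustrated without an SELinux-enabled kernel (the file name is illustrative; `stat -c` is a GNU coreutils flag, and the SELinux contexts themselves would be shown with the -Z flag on an SELinux system):

```shell
dir=$(mktemp -d)
cd "$dir"
echo "private" > data.txt
chmod 640 data.txt      # owner may read/write, group may read, others get nothing
ls -l data.txt          # shows -rw-r-----: the DAC bits the kernel consults first
stat -c %a data.txt     # prints the mode in octal: 640
```

If these bits deny access, SELinux is never consulted; only when they allow access does the SELinux policy make the final decision.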
How SELinux defines subjects and objects
There are two important concepts, subjects and objects, in MAC's security context. A MAC
(or non-discretionary access control) framework allows you to define permissions for how
all processes, called subjects, interact with other parts of the system such as files, devices,
sockets, ports, and other processes -- objects. This is done through an administratively-
defined security policy over all processes and objects. These processes and objects are
controlled through the kernel, and security decisions are made on all available information
rather than just user identity. With this model, a process can be granted just the permissions
it needs to be functional. This follows the principle of least privilege, which contrasts with
the full privilege concept of DAC.
Under MAC, for example, users who have exposed their data using "chmod" are protected
by the fact that their data is only associated with user home directories, and confined
processes cannot touch those files without permission and purpose written into the policy.
SELinux security policies: Strict and targeted
SELinux follows the model of least-privilege. By default, everything is denied and then a
policy is written that gives each element of the system (a service, program, user, process)
only the access required to perform its specified function. If a service, program or user tries to access or modify a file or resource that is not necessary for it to function, then access is
denied and the action is logged. Because SELinux is implemented within the kernel,
individual applications do not need to be especially written or modified to work with
SELinux. If SELinux blocks an action, this appears as just a normal "access denied" type
error to the application.
The flow of SELinux with the default targeted policy can be depicted as a logical block diagram (figure not reproduced here).
One of the most important concepts is SELinux policy. The model of least-privilege best
describes the “strict” policy. SELinux allows different policies and the default policy in
CentOS 5 and RHEL is the “targeted” policy that targets and confines key system processes.
In RHEL, over 200 targets exist (including httpd, named, dhcpd, mysqld). Everything else on
the system runs in an unconfined domain and is unaffected by SELinux. The goal is for
every process that is installed and running at boot by default to be running in a confined
domain. The targeted policy is designed to protect as many key processes as possible without
adversely affecting the end user experience and most users should be totally unaware that
SELinux is even running.
Another important concept is SELinux access control. There are three forms of access control: type enforcement (TE), role-based access control (RBAC) and multi-level security (MLS). Among these, TE is the primary mechanism of access control in the targeted policy.
Setting up the SELinux security context
It is very important to understand that all processes and files in the SELinux model possess
an SELinux security context. This security context can easily be displayed by passing the "-Z"
flag to commands such as ls.
Most SELinux troubleshooting revolves around the security contexts of objects. These
security contexts are in the format user:role:type:mls. The "mls" field is usually hidden (being
the default in the targeted policy), so, for example, for the hello.pl file, root is the user, object_r is the role
and the type is httpd_sys_content_t. Within the default targeted policy, the type is the important
field used to implement TE.
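Because a context is simply a colon-delimited user:role:type:mls string, it can be dissected with ordinary shell tools. A minimal sketch (the context string below is a typical example, not captured from a live system):

```shell
#!/bin/bash
# Split a sample SELinux security context into its four fields.
ctx="system_u:object_r:httpd_sys_content_t:s0"

IFS=':' read -r user role type mls <<< "$ctx"

echo "user: $user"   # user: system_u
echo "role: $role"   # role: object_r
echo "type: $type"   # type: httpd_sys_content_t
echo "mls:  $mls"    # mls:  s0
```

On a live system you would obtain the string from ls -Z or ps -Z output instead of hard-coding it.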
Similarly you can list the security context of all processes running on Linux using the same
"-Z" flag with the ps command, for example:
#ps -efZ | grep mail
system_u:system_r:sendmail_t root 2661 1 0 12:30 ? 00:00:00 sendmail: accepting connections
system_u:system_r:sendmail_t smmsp 2670 1 0 12:30 ? 00:00:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
The above output shows that the sendmail processes on our Linux server are running
under the "sendmail_t" type domain.
5. Partitions creation:
5.1. Formatting Partitions
At the shell prompt, I begin making the file systems on my partitions. Continuing with the
example, for the first partition this is:
# mke2fs /dev/sda1
I need to do this for each of my partitions, but not for /dev/sda4 (my extended partition).
Linux supports types of file systems other than ext2. You can find out what kinds your
kernel supports by looking in: /usr/src/linux/include/linux/fs.h
The most common file systems can be made with programs in /sbin that start with
"mk" like mkfs.msdos and mke2fs.
5.2. Activating Swap Space
To set up a swap partition: # mkswap -f /dev/hda5
To activate the swap area: # swapon /dev/hda5
Normally, the swap area is activated by the initialization scripts at boot time.
5.3. Mounting Partitions
Mounting a partition means attaching it to the Linux file system. To mount a Linux partition:
# mount -t ext2 /dev/sda1 /opt
-t ext2
File system type. Other types you are likely to use are:
ext3 (journaling file system based on ext2)
msdos (DOS)
hfs (mac)
iso9660 (CDROM)
nfs (network file system)
/dev/sda1
Device name. Other device names you are likely to use:
/dev/hdb2 (second partition in second IDE drive)
/dev/fd0 (floppy drive A)
/dev/cdrom (CDROM)
/opt
mount point. This is where you want to "see" your partition. When you type ls
/opt, you can see what is in /dev/sda1. If there are already some directories and/or
files under /opt, they will be invisible after this mount command.
5.4. Some facts about file systems and fragmentation
Disk space is administered by the operating system in units of blocks and fragments of
blocks. In ext2, fragments and blocks have to be of the same size, so we can limit our
discussion to blocks.
Files come in any size. They don't end on block boundaries. So with every file a part of the
last block of every file is wasted. Assuming that file sizes are random, there is approximately
a half block of waste for each file on your disk. Tanenbaum calls this "internal
fragmentation" in his book "Operating Systems".
You can guess the number of files on your disk from the number of allocated inodes. On my disk:
# df -i
Filesystem Inodes IUsed IFree %IUsed Mounted on
/dev/hda3 64256 12234 52022 19% /
/dev/hda5 96000 43058 52942 45% /var
there are about 12000 files on / and about 44000 files on /var. At a block size of 1 KB,
about 6+22 = 28 MB of disk space are lost in the tail blocks of files. Had I chosen a
block size of 4 KB, I would have lost four times this space.
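The half-block-per-file estimate can be reproduced with shell arithmetic, using the IUsed counts from the df -i output above:

```shell
#!/bin/bash
# Estimate internal fragmentation: ~half a block wasted per file.
files_root=12234    # IUsed on /
files_var=43058     # IUsed on /var
block_size=1024     # 1 KB blocks

waste_kb=$(( (files_root + files_var) * block_size / 2 / 1024 ))
echo "approx. ${waste_kb} KB wasted in tail blocks"   # approx. 27646 KB wasted in tail blocks
```

That is about 27 MB, in line with the rough 6+22 = 28 MB figure above; with 4 KB blocks the estimate quadruples.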
Data transfer is faster for large contiguous chunks of data, though. That's why ext2 tries
to preallocate space in units of 8 contiguous blocks for growing files. Unused preallocation
is released when the file is closed, so no space is wasted.
Noncontiguous placement of blocks in a file is bad for performance, since files are often
accessed in a sequential manner. It forces the operating system to split a disk access and the
disk to move the head. This is called "external fragmentation" or simply "fragmentation" and
is a common problem with MS-DOS file systems. In conjunction with the abysmal buffer
cache used by MS-DOS, the effects of file fragmentation on performance are very
noticeable. DOS users are accustomed to defragging their disks every few weeks and some
have even developed some ritualistic beliefs regarding defragmentation.
None of these habits should be carried over to Linux and ext2. Linux native file systems do
not need defragmentation under normal use and this includes any condition with at least
5% of free space on a disk. There is a defragmentation tool for ext2 called defrag, but users
are cautioned against casual use. A power outage during such an operation can trash your
file system. Since you need to back up your data anyway, simply writing back from your
copy will do the job.
The MS-DOS file system is also known to lose large amounts of disk space due to internal
fragmentation. For partitions larger than 256 MB, DOS block sizes grow so large that they
are no longer useful (This has been corrected to some extent with FAT32). Ext2 does not
force you to choose large blocks for large file systems, except for very large file systems in
the 0.5 TB range (that's terabytes with 1 TB equaling 1024 GB) and above, where small
block sizes become inefficient. So unlike DOS there is no need to split up large disks into
multiple partitions to keep block size down.
Use a 1 KB block size if you have many small files. For large partitions, 4 KB blocks are fine.
6. Shell Introduction
String operators in the shell use a curly-bracket syntax that is unique among programming
languages. In the shell, any variable can be written as ${name_of_the_variable} instead of
$name_of_the_variable. This notation was introduced to protect a variable name from
merging with the string that comes after it, but was extended to allow string operations on
variables. Here is an example in which it is used to separate the variable $var from the string
"_string":
$ export var='test'
$ echo ${var}_string   # var is a variable; ${var} expands to the value test
$ echo $var_string     # var_string is a variable that doesn't exist,
                       # so echo doesn't print anything
In the Korn shell (ksh88) this notation was extended to allow expressions inside the curly
brackets, for example ${var:=moo}.
Each operation is encoded using a special symbol or a two-symbol "digram" (for example :- , := ,
etc.). An argument that the operator may need is positioned after the symbol of the operation.
Later this notation was extended in ksh93 and adopted by bash and other shells.
This "ksh-originated" group of operators is the most popular and probably the most widely
used group of string-handling operators so it makes sense to learn them, if only in order to
be able to modify old scripts.
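A minimal sketch of the most common ksh-originated digrams (:- , := , :+) in action:

```shell
#!/bin/bash
unset var

echo "${var:-default}"   # default  (:- substitutes a value but does not assign it)
echo "${var:=moo}"       # moo      (:= substitutes AND assigns)
echo "$var"              # moo      (the assignment stuck)
echo "${var:+isset}"     # isset    (:+ substitutes only when the variable is set)
```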
6.1 Implementation of classic string operations in shell
Despite the shell's deficiencies in this area and idiosyncrasies preserved from the 1970s, most
classic string operations can be implemented in the shell. You can define functions that behave
almost exactly like their counterparts in Perl or other "more normal" languages. In case shell
facilities are not enough, you can use AWK or Perl. It is actually sad that AWK was never
integrated into the shell.
6.1.1 Length Operator
There are several ways to get length of the string.
The simplest one is ${#varname}, which returns the length of the value of the
variable as a character string. For example, if filename has the value ntp.conf,
then ${#filename} would have the value 8.
The second is to use the expr command, for example
expr length $string
or
expr "$string" : '.*'
Additional example from Advanced Bash-Scripting Guide
stringZ=abcABC123ABCabc
echo ${#stringZ} # 15
echo `expr length $stringZ` # 15
echo `expr "$stringZ" : '.*'` # 15
For checking whether the length of a string is zero you can use the -z STRING operator
(-n STRING tests for non-zero length).
A more complex example: here is a function for validating that a string is within a
given maximum length. It requires two parameters, the actual string and the maximum length the
string should be.

check_length()
{
    # check we have the right params
    if (( $# != 2 )) ; then
        echo "check_length needs two parameters: a string and max_length"
        return 1
    fi
    if (( ${#1} > $2 )) ; then
        return 1
    fi
    return 0
}

You could call the function check_length like this:

#!/usr/bin/bash
# test_name
while :
do
    echo -n "Enter customer name: "
    read NAME
    check_length "$NAME" 10 && break
    echo "The string $NAME is longer than 10 characters"
done
echo $NAME
6.2 Determining the Length of a Matching Substring at the Beginning of a String
This is a rarely used capability of the expr command, but it can sometimes be useful:

expr match "$string" '$substring'

where $string is a variable or literal string and $substring is a regular expression.

my_regex=abcABC123ABCabc
#        |------|
echo `expr match "$my_regex" 'abc[A-Z]*.2'`   # 8
echo `expr "$my_regex" : 'abc[A-Z]*.2'`       # 8
7. String processing
7.1 Manipulating Strings
Bash supports a surprising number of string manipulation operations. Unfortunately, these
tools lack a unified focus. Some are a subset of parameter substitution, and others fall under
the functionality of the UNIX expr command. This results in inconsistent command syntax
and overlap of functionality, not to mention confusion.
String Length
${#string}
expr length $string
These are the equivalent of strlen() in C.
expr "$string" : '.*'
stringZ=abcABC123ABCabc
echo ${#stringZ} # 15
echo `expr length $stringZ` # 15
echo `expr "$stringZ" : '.*'` # 15
Example 7-1. Inserting a blank line between paragraphs in a text file

#!/bin/bash
# paragraph-space.sh
# Ver. 2.1, Reldate 29Jul12 [fixup]

#  Inserts a blank line between paragraphs of a single-spaced text file.
#  Usage: $0 <FILENAME

MINLEN=60        # Change this value? It's a judgment call.
#  Assume lines shorter than $MINLEN characters ending in a period
#+ terminate a paragraph. See exercises below.

while read line  # For as many lines as the input file has ...
do
  echo "$line"   # Output the line itself.

  len=${#line}
  if [[ "$len" -lt "$MINLEN" && "$line" =~ [*{\.}]$ ]]
# if [[ "$len" -lt "$MINLEN" && "$line" =~ \[*\.\] ]]
# An update to Bash broke the previous version of this script. Ouch!
# Thank you, Halim Srama, for pointing this out and suggesting a fix.
  then echo      #  Add a blank line immediately
  fi             #+ after a short line terminated by a period.
done

exit

# Exercises:
# ---------
#  1) The script usually inserts a blank line at the end
#+    of the target file. Fix this.
#  2) Line 17 only considers periods as sentence terminators.
#     Modify this to include other common end-of-sentence characters,
#+    such as ?, !, and ".
Length of Matching Substring at Beginning of String
expr match "$string" '$substring'
$substring is a regular expression.
expr "$string" : '$substring'
$substring is a regular expression.
stringZ=abcABC123ABCabc
#        |------|
#        12345678
echo `expr match "$stringZ" 'abc[A-Z]*.2'` # 8
echo `expr "$stringZ" : 'abc[A-Z]*.2'` # 8
Index
expr index $string $substring
Numerical position in $string of first character in $substring that matches.
stringZ=abcABC123ABCabc
#        123456 ...
echo `expr index "$stringZ" C12`   # 6
# C position.
echo `expr index "$stringZ" 1c`    # 3
# 'c' (in #3 position) matches before '1'.
This is the near equivalent of strchr() in C.
Substring Extraction
${string:position}
Extracts substring from $string at $position.
If the $string parameter is "*" or "@", then this extracts the positional parameters, [1] starting at $position.
${string:position:length}
Extracts $length characters of substring from $string at $position.
stringZ=abcABC123ABCabc
# 0123456789.....
# 0-based indexing.
echo ${stringZ:0} # abcABC123ABCabc
echo ${stringZ:1} # bcABC123ABCabc
echo ${stringZ:7} # 23ABCabc
echo ${stringZ:7:3} # 23A
# Three characters of substring.
# Is it possible to index from the right end of the string?
echo ${stringZ:-4}     # abcABC123ABCabc
# Defaults to full string, as in ${parameter:-default}.
# However . . .
echo ${stringZ:(-4)}   # Cabc
echo ${stringZ: -4}    # Cabc
# Now, it works.
# Parentheses or an added space "escape" the position parameter.
# Thank you, Dan Jacobson, for pointing this out.
The position and length arguments can be "parameterized," that is, represented as a
variable, rather than as a numerical constant.
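For instance, the position can be driven by a loop variable to walk a string character by character; a small sketch:

```shell
#!/bin/bash
stringZ=abcABC123ABCabc

# Count the digits in $stringZ, one character at a time.
digits=0
for (( i = 0; i < ${#stringZ}; i++ ))
do
  ch=${stringZ:i:1}     # position parameterized by the loop variable
  case $ch in
    [0-9]) (( digits++ )) ;;
  esac
done

echo "$digits digit(s) found"   # 3 digit(s) found
```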
Example 7-2. Generating an 8-character "random" string
#!/bin/bash
# rand-string.sh
# Generating an 8-character "random" string.

if [ -n "$1" ]  #  If command-line argument present,
then            #+ then set start-string to it.
  str0="$1"
else            #  Else use PID of script as start-string.
  str0="$$"
fi

POS=2  # Starting from position 2 in the string.
LEN=8  # Extract eight characters.

str1=$( echo "$str0" | md5sum | md5sum )
#  Doubly scramble     ^^^^^^   ^^^^^^
#+ by piping and repiping to md5sum.

randstring="${str1:$POS:$LEN}"
#  Can parameterize ^^^^ ^^^^

echo "$randstring"

exit $?

# bozo$ ./rand-string.sh my-password
# 1bdd88c4

#  No, this is not recommended
#+ as a method of generating hack-proof passwords.
If the $string parameter is "*" or "@", then this extracts a maximum of $length
positional parameters, starting at $position.
echo ${*:2}    # Echoes second and following positional parameters.
echo ${@:2}    # Same as above.
echo ${*:2:3}
# Echoes three positional parameters, starting at second.
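The same slicing works on a function's own arguments, which gives a simple way to drop the first parameter; a sketch:

```shell
#!/bin/bash
# Print all arguments except the first, via positional-parameter slicing.
all_but_first()
{
  echo "${@:2}"
}

all_but_first red green blue    # green blue
```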
expr substr $string $position $length
Extracts $length characters from $string starting at $position.
stringZ=abcABC123ABCabc
# 123456789......
# 1-based indexing.
echo `expr substr $stringZ 1 2` # ab
echo `expr substr $stringZ 4 3` # ABC
expr match "$string" '\($substring\)'
Extracts $substring at beginning of $string, where $substring is a
regular expression.
expr "$string" : '\($substring\)'
Extracts $substring at beginning of $string, where $substring is a
regular expression.
stringZ=abcABC123ABCabc # =======
echo `expr match "$stringZ" '\(.[b-c]*[A-Z]..[0-9]\)'` # abcABC1
echo `expr "$stringZ" : '\(.[b-c]*[A-Z]..[0-9]\)'` # abcABC1
echo `expr "$stringZ" : '\(.......\)'` # abcABC1
# All of the above forms give an identical result.
expr match "$string" '.*\($substring\)'
Extracts $substring at end of $string, where $substring is a regular
expression.
expr "$string" : '.*\($substring\)'
Extracts $substring at end of $string, where $substring is a regular
expression.
stringZ=abcABC123ABCabc
# ======
echo `expr match "$stringZ" '.*\([A-C][A-C][A-C][a-c]*\)'` # ABCabc
echo `expr "$stringZ" : '.*\(......\)'` # ABCabc
Substring Removal
${string#substring}
Deletes shortest match of $substring from front of $string.
${string##substring}
Deletes longest match of $substring from front of $string.
stringZ=abcABC123ABCabc
#       |----|           shortest
#       |----------|     longest
echo ${stringZ#a*C} # 123ABCabc
# Strip out shortest match between 'a' and 'C'.
echo ${stringZ##a*C} # abc
# Strip out longest match between 'a' and 'C'.
# You can parameterize the substrings.

X='a*C'
echo ${stringZ#$X}    # 123ABCabc
echo ${stringZ##$X}   # abc
# As above.
${string%substring}
Deletes shortest match of $substring from back of $string.
For example:

# Rename all filenames in $PWD with "TXT" suffix to a "txt" suffix.
# For example, "file1.TXT" becomes "file1.txt" . . .

SUFF=TXT
suff=txt

for i in $(ls *.$SUFF)
do
  mv -f $i ${i%.$SUFF}.$suff
  #  Leave unchanged everything *except* the shortest pattern match
  #+ starting from the right-hand-side of the variable $i . . .
done ### This could be condensed into a "one-liner" if desired.

# Thank you, Rory Winston.
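The same front/back deletion operators can emulate basename and dirname; a minimal sketch with a hypothetical path:

```shell
#!/bin/bash
path=/usr/local/share/doc/README.txt

echo "${path##*/}"   # README.txt                   (like basename)
echo "${path%/*}"    # /usr/local/share/doc         (like dirname)
echo "${path##*.}"   # txt                          (extension only)
echo "${path%.*}"    # /usr/local/share/doc/README  (extension stripped)
```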
${string%%substring}
Deletes longest match of $substring from back of $string.
stringZ=abcABC123ABCabc
#                    ||     shortest
#        |------------|     longest

echo ${stringZ%b*c}    # abcABC123ABCa
# Strip out shortest match between 'b' and 'c', from back of $stringZ.

echo ${stringZ%%b*c}   # a
# Strip out longest match between 'b' and 'c', from back of $stringZ.

This operator is useful for generating filenames.

Example 7-3. Converting graphic file formats, with filename change

#!/bin/bash
#  cvt.sh:
#  Converts all the MacPaint image files in a directory to "pbm" format.

#  Uses the "macptopbm" binary from the "netpbm" package,
#+ which is maintained by Brian Henderson ([email protected]).
#  Netpbm is a standard part of most Linux distros.

OPERATION=macptopbm
SUFFIX=pbm          # New filename suffix.

if [ -n "$1" ]
then
  directory=$1      # If directory name given as a script argument...
else
  directory=$PWD    # Otherwise use current working directory.
fi

#  Assumes all files in the target directory are MacPaint image files,
#+ with a ".mac" filename suffix.

for file in $directory/*    # Filename globbing.
do
  filename=${file%.*c}      #  Strip ".mac" suffix off filename
                            #+ ('.*c' matches everything
                            #+ between '.' and 'c', inclusive).
  $OPERATION $file > "$filename.$SUFFIX"
                            # Redirect conversion to new filename.
  rm -f $file               # Delete original files after converting.
  echo "$filename.$SUFFIX"  # Log what is happening to stdout.
done

exit 0

# Exercise:
# --------
#  As it stands, this script converts *all* the files in the current
#+ working directory.
#  Modify it to work *only* on files with a ".mac" suffix.

# *** And here's another way to do it. *** #

#!/bin/bash
# Batch convert into different graphic formats.
# Assumes imagemagick installed (standard in most Linux distros).

INFMT=png   # Can be tif, jpg, gif, etc.
OUTFMT=pdf  # Can be tif, jpg, gif, pdf, etc.

for pic in *"$INFMT"
do
  p2=$(ls "$pic" | sed -e s/\.$INFMT//)
  # echo $p2
  convert "$pic" $p2.$OUTFMT
done

exit $?
Example 7-4. Converting streaming audio files to ogg
#!/bin/bash
# ra2ogg.sh: Convert streaming audio files (*.ra) to ogg.

# Uses the "mplayer" media player program:
#      http://www.mplayerhq.hu/homepage
# Uses the "ogg" library and "oggenc":
#      http://www.xiph.org/
#
#  This script may need appropriate codecs installed, such as sipr.so ...
#  Possibly also the compat-libstdc++ package.

OFILEPREF=${1%%ra}    # Strip off the "ra" suffix.
OFILESUFF=wav         # Suffix for wav file.
OUTFILE="$OFILEPREF""$OFILESUFF"
E_NOARGS=85

if [ -z "$1" ]        # Must specify a filename to convert.
then
  echo "Usage: `basename $0` [filename]"
  exit $E_NOARGS
fi

#########################################################################
mplayer "$1" -ao pcm:file=$OUTFILE
oggenc "$OUTFILE"   # Correct file extension automatically added by oggenc.
#########################################################################

rm "$OUTFILE"       # Delete intermediate *.wav file.
                    # If you want to keep it, comment out above line.

exit $?

#  Note:
#  ----
#  On a Website, simply clicking on a *.ram streaming audio file
#+ usually only downloads the URL of the actual *.ra audio file.
#  You can then use "wget" or something similar
#+ to download the *.ra file itself.

#  Exercises:
#  ---------
#  As is, this script converts only *.ra filenames.
#  Add flexibility by permitting use of *.ram and other filenames.
#
#  If you're really ambitious, expand the script
#+ to do automatic downloads and conversions of streaming audio files.
#  Given a URL, batch download streaming audio files (using "wget")
#+ and convert them on the fly.
A simple emulation of getopt using substring-extraction constructs.
Example 7-5. Emulating getopt
#!/bin/bash
# getopt-simple.sh
# Author: Chris Morgan
# Used in the ABS Guide with permission.

getopt_simple()
{
    echo "getopt_simple()"
    echo "Parameters are '$*'"
    until [ -z "$1" ]
    do
      echo "Processing parameter of: '$1'"
      if [ ${1:0:1} = '/' ]
      then
          tmp=${1:1}             # Strip off leading '/' . . .
          parameter=${tmp%%=*}   # Extract name.
          value=${tmp##*=}       # Extract value.
          echo "Parameter: '$parameter', value: '$value'"
          eval $parameter=$value
      fi
      shift
    done
}

# Pass all options to getopt_simple().
getopt_simple $*

echo "test is '$test'"
echo "test2 is '$test2'"

exit 0  # See also, UseGetOpt.sh, a modified version of this script.

---

sh getopt_example.sh /test=value1 /test2=value2
Parameters are '/test=value1 /test2=value2'
Processing parameter of: '/test=value1'
Parameter: 'test', value: 'value1'
Processing parameter of: '/test2=value2'
Parameter: 'test2', value: 'value2'
test is 'value1'
test2 is 'value2'
Substring Replacement
${string/substring/replacement}
Replace first match of $substring with $replacement. [2]
${string//substring/replacement}
Replace all matches of $substring with $replacement.
stringZ=abcABC123ABCabc
echo ${stringZ/abc/xyz}        # xyzABC123ABCabc
# Replaces first match of 'abc' with 'xyz'.
echo ${stringZ//abc/xyz}       # xyzABC123ABCxyz
# Replaces all matches of 'abc' with 'xyz'.

echo ---------------
echo "$stringZ"                # abcABC123ABCabc
echo ---------------
# The string itself is not altered!

# Can the match and replacement strings be parameterized?
match=abc
repl=000
echo ${stringZ/$match/$repl}   # 000ABC123ABCabc
echo ${stringZ//$match/$repl}  # 000ABC123ABC000
# Yes!

echo

# What happens if no $replacement string is supplied?
echo ${stringZ/abc}            # ABC123ABCabc
echo ${stringZ//abc}           # ABC123ABC
# A simple deletion takes place.
${string/#substring/replacement}
If $substring matches front end of $string, substitute $replacement for
$substring.
${string/%substring/replacement}
If $substring matches back end of $string, substitute $replacement for
$substring.
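The two anchored forms above receive no example in the text; a short sketch using the same test string:

```shell
#!/bin/bash
stringZ=abcABC123ABCabc

echo ${stringZ/#abc/XYZ}   # XYZABC123ABCabc
# Replaces the front-end match of 'abc' with 'XYZ'.

echo ${stringZ/%abc/XYZ}   # abcABC123ABCXYZ
# Replaces the back-end match of 'abc' with 'XYZ'.

echo ${stringZ/#ABC/XYZ}   # abcABC123ABCabc
# 'ABC' does not match at the front, so nothing is replaced.
```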
7.2. Manipulating strings using awk
A Bash script may invoke the string manipulation facilities of awk as an alternative to
using its built-in operations.
Example 7-6. Alternate ways of extracting and locating substrings #!/bin/bash
# substring-extraction.sh
String=23skidoo1
#      012345678    Bash
#      123456789    awk
#  Note different string indexing system:
#  Bash numbers first character of string as 0.
#  Awk  numbers first character of string as 1.

echo ${String:2:4} # position 3 (0-1-2), 4 characters long
                   # skid

# The awk equivalent of ${string:pos:length} is substr(string,pos,length).
echo | awk '
{ print substr("'"${String}"'",3,4)  # skid
}
'
#  Piping an empty "echo" to awk gives it dummy input,
#+ and thus makes it unnecessary to supply a filename.
echo "----"
# And likewise:
echo | awk '
{ print index("'"${String}"'", "skid")   # 3
}                                        # (skid starts at position 3)
'   # The awk equivalent of "expr index" ...
exit 0
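Beyond substr() and index(), awk also offers length(), toupper()/tolower() and gsub(); a small sketch using the same $String:

```shell
#!/bin/bash
String=23skidoo1

echo | awk '{ print length("'"${String}"'") }'    # 9
echo | awk '{ print toupper("'"${String}"'") }'   # 23SKIDOO1

# gsub() replaces every match in place; here, digits become '#'.
echo | awk '{ s = "'"${String}"'"; gsub(/[0-9]/, "#", s); print s }'   # ##skidoo#
```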
8. Investigating and managing processes:
1. Process
A process is a set of instructions loaded into memory. A numeric process ID (PID) is used for identification. The UID, GID and SELinux context determine filesystem access. The Linux kernel tracks every aspect of a process by its PID under /proc/PID.
Listing Process
The ps command is used to view process information. By default, it shows processes from the current terminal.
Options
a: Shows processes from all terminals.
x: Shows all processes owned by you, or all processes when used together with the a option
(such as: ps ax), including processes that are not controlled by a terminal, such as daemon
processes; these show up as ? in the tty column of the output.
u: Shows process owner information.
f: Shows process parentage.
o: Shows custom information, such as pid, tty, stat, nice, %cpu, %mem, time, comm,
command, euser, ruser.
Examples:
[mitesh@Matrix ~]$ ps
[mitesh@Matrix ~]$ ps a
[mitesh@Matrix ~]$ ps x
[mitesh@Matrix ~]$ ps u
[mitesh@Matrix ~]$ ps f
[mitesh@Matrix ~]$ ps xo pid,tty,stat,%cpu,%mem,time,command,euser,ruser
[mitesh@Matrix ~]$ ps axo pid,tty,stat,%cpu,%mem,time,command,euser,ruser
Process Status
Every process has a state property, which describes whether the process is actively
using the CPU (Running), in memory but not doing anything (Sleeping), waiting for a
resource to become available (Uninterruptible Sleep) or terminated but not flushed
from the process list (Zombie).
Running and Sleeping are normal, but the presence of Uninterruptible Sleep or
Zombie processes may indicate problems lurking on your system.
Uninterruptible Sleep
The process is sleeping and cannot be woken up until an event occurs; it cannot be woken up by a signal.
This is typically the result of I/O operations, such as failed network connections (for
NFS hard mounts).
Zombie
Just before a process dies, it sends a signal to its parent and waits for an
acknowledgment before terminating.
Even if the parent process does not immediately acknowledge this signal, all
resources except for the process identity number (PID) are released.
Zombie processes are cleared from the system during the next system reboot And
do not adversely affect system performance.
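The state letters described above appear in the STAT column of ps output; a minimal sketch that inspects the current shell (the exact letters shown vary with system activity):

```shell
#!/bin/bash
# Query the state of the current shell process; typically S (sleeping)
# while waiting for input, or R (running) while on the CPU.
state=$(ps -o stat= -p $$)
echo "PID $$ state: $state"
```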
Finding Processes
# Most flexible
[mitesh@Matrix ~]$ ps axo pid,tty,comm | grep 'cups'
1516 ?      cupsd
3066 pts/1  eggcups

# By predefined patterns: pgrep
[mitesh@Matrix ~]$ pgrep -U root
[mitesh@Matrix ~]$ pgrep -G mitesh
[mitesh@Matrix ~]$ pgrep cups
1516
3066

# By exact program name: pidof
[mitesh@Matrix ~]$ pidof cupsd
1516
2. Signals
Signals are simple messages that can be communicated to processes with commands like kill.
They are sent directly to processes; no user interface is required. Programs associate actions
with each signal.
Signals are specified by name or number when sent; man 7 signal shows the complete list.
Signal 1   HUP  (SIGHUP)   Re-read configuration files
Signal 9   KILL (SIGKILL)  Terminate immediately
Signal 15  TERM (SIGTERM)  Terminate cleanly
Signal 18  CONT (SIGCONT)  Continue if stopped
Signal 19  STOP (SIGSTOP)  Stop process
Sending Signals to Process
By PID: kill [signal] pid ...
By pattern: pkill [signal] pattern
By name: killall [signal] command ...
kill can send many signals, but processes only respond to those signals that they have been
programmed to recognize.
For example: most services are programmed to reload their configuration when they receive
a HUP(1) signal.
Some processes terminate when they have completed their tasks.
Interactive applications may need the user to issue a quit command.
In other cases, processes may need to be terminated with Ctrl+c, which sends an INT(2)
signal to the process.
Shutting a process down cleanly means terminating its child processes first and completing
any pending I/O operations.
NOTE!: The KILL(9) signal should be used only if a process will not respond to Ctrl+c or a
TERM(15) signal. Using the KILL(9) signal on a routine basis may cause zombie processes
and lost data.
# The following are all identical and will send the default TERM(15) signal to the process with PID number 3705
[mitesh@Matrix ~]$ kill 3705
[mitesh@Matrix ~]$ kill -15 3705
[mitesh@Matrix ~]$ kill -TERM 3705
[mitesh@Matrix ~]$ kill -SIGTERM 3705
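From a shell script, associating an action with a signal is done with the trap built-in; a small self-contained sketch:

```shell
#!/bin/bash
# Record receipt of SIGTERM instead of dying.
caught=""
trap 'caught=TERM' TERM

kill -TERM $$   # Deliver SIGTERM to this very shell; the trap handler fires.
echo "handler recorded: $caught"   # handler recorded: TERM
```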
3. Scheduling Priority
Every running process has a scheduling priority: A ranking among running
processes determining which should get the attention of the processor.
Priority is affected by a process's nice value. Nice values range from -20 to 19 (the default is 0):
-20: highest CPU priority
19: lowest CPU priority
Altering Scheduling Priority
Niceness value may be altered…
When starting a process
[mitesh@Matrix ~]$ nice -n 5 command
After starting the process:
[mitesh@Matrix ~]$ renice 5 -p PID
NOTE!: Only root may decrease a nice value. Non-privileged users may start a process at
any positive nice value but cannot lower it once raised.
[mitesh@Matrix ~]$ nice -n 10 myprog
[mitesh@Matrix ~]$ renice 15 -p PID
[root@Matrix ~]# renice -19 -p PID
Process Management Tools CLI - top, htop
Displays the list of processes running on your system, updated every 3 seconds. You can use keystrokes to kill, renice and change the sorting order of processes. Use the ? key to view the complete list of hotkeys; press q to exit top.
GUI - gnome-system-monitor
The gnome-system-monitor, which can be run from the console Or by
selecting Applications -> System Tools -> System Monitor
Displays real-time process information and allows killing, re-nicing and sorting.
4. Job Control
Background Process
Append the ampersand to the command line: firefox &
Suspending a Running Program
Use Ctrl+z, which sends a terminal stop signal (TSTP, signal 20) to the foreground job.
Manage Background Or Suspended Jobs
List Job Numbers and Names: jobs Resume in the Background: bg [%jobnum] Resume in the Foreground: fg [%jobnum] Send a Signal: kill [SIGNAL] [%jobnum]
Examples:
[mitesh@Matrix ~]$ ping 127.0.0.1 &> /dev/null
^Z
[1]+ Stopped    ping 127.0.0.1 &>/dev/null
[mitesh@Matrix ~]$ bg
[1]+ ping 127.0.0.1 &>/dev/null &
[mitesh@Matrix ~]$ firefox &
[2] 4162
NOTE!: The number next to [2] after backgrounding firefox is the PID.
[mitesh@Matrix ~]$ jobs
[1]- Running    ping 127.0.0.1 &>/dev/null &
[2]+ Running    firefox &
NOTE!: The + or - sign next to the job number tells which job is the default; the
+ sign marks the default job.
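In scripts, the same job-control plumbing is reached through $! and wait; a sketch:

```shell
#!/bin/bash
# Start a background job, note its PID, and wait for it to finish.
sleep 1 &
bgpid=$!             # $! holds the PID of the most recent background job

kill -0 "$bgpid" && echo "job $bgpid is running"   # kill -0 only tests existence

wait "$bgpid"        # Block until the job exits.
status=$?
echo "job finished with status $status"   # job finished with status 0
```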
5. Scheduling Process
One-time jobs use at; recurring jobs use crontab.

            at              crontab
Create      at time         crontab -e
List        at -l           crontab -l
Details     at -c jobnum    -
Remove      at -d jobnum    crontab -r
Edit        -               crontab -e
Non-redirected output is mailed to the user. Root can modify jobs for other users.
at command
Scheduling a one-time job with the at command: one command per line, terminated with Ctrl+d.
Options

Time specifications:
at 8:00am December 7        at 7 am Thursday
at midnight + 23 minutes    at now + 5 minutes

Command    Alias           Meaning
atq        at -l           Lists the jobs currently pending.
atrm       at -d jobnum    Deletes the job.
           at -c jobnum    Cats the full environment for the specified job.

Example:
[mitesh@Matrix ~]$ at 0200
at> date
at> cal
at> <EOT>
job 1 at 2011-08-26 02:00
[mitesh@Matrix ~]$ atq
[mitesh@Matrix ~]$ at -l
1 2011-08-26 02:00 a mitesh
crontab command
Scheduling Recurring Jobs with crontab command The cron mechanism is controlled by a process named crond.
This process runs every minute and determines if an entry in a user's cron tables needs
to be executed.
The crontabs are stored in /var/spool/cron/
The root can modify the jobs for other users with crontab -u username and any of
the other options, such as -e.
Crontab File Format
Comment lines begin with #. One entry per line, with no limit to line length. An entry consists of five space-delimited fields followed by a command. The fields are Minute, Hour, Day of Month, Month, Day of Week. An asterisk (*) in a field represents all valid values. Multiple values are separated by commas. See man 5 crontab for more details.
/----------------------------------------------------------------------- \
| |
| Minute | 0-59 |
| Hour | 0-23 |
| Day Of Month | 1-31 |
| Month | 1-12 (Or Jan, Feb, Mar, Etc) |
| Day Of Week | 0-7 (Or Sun, Mon, Tue, Etc) |
| (0 or 7 = Sunday, 1 = Monday) |
| |
\----------------------------------------------------------------------- /
# * * * * *  command to execute
# │ │ │ │ │
# │ │ │ │ └───── day of week (0 - 6) (0 to 6 are Sunday to Saturday; 7 is also Sunday, the same as 0)
# │ │ │ └────────── month (1 - 12)
# │ │ └─────────────── day of month (1 - 31)
# │ └──────────────────── hour (0 - 23)
# └───────────────────────── min (0 - 59)
Example:
[mitesh@Matrix ~]$ crontab -e
#Min Hour DOM Month DOW Command
0 0 31 10 *   mail -s "boo" $LOGNAME < boo.txt
0 2 * * *     netstat -tulpn | diff - /media/cdrom/baseline
0 4 * * 1,3,5 find ~ -name core | xargs rm -f {}
crontab in details
Special Time Specification Nicknames: @reboot, @yearly, @annually,
@monthly, @weekly, @daily, @hourly See man 5 crontab for more details
Examples: [mitesh@Matrix ~]$ crontab -e
#Min Hour DOM Month DOW Command
0 0 31 10 * mail -s "boo" $LOGNAME < boo.txt
0 2 * * * netstat -tulpn | diff - /media/cdrom/baseline
0 4 * * 1,3,5 find ~ -name core | xargs rm -f {}
*/2 * * * * echo "Every 2 Minutes" &> /dev/tty1
*/5 * * * * echo "Every 5 Minutes" &> /dev/tty1
@reboot echo "Runs Once After Reboot" &> /dev/tty1
[mitesh@Matrix ~]$ echo '*/15 8-17 * * 1-5 echo Breaktime' | crontab
The Cron Access Control
/---------------------------------------------------------------------------\
| Files present                      | Who can install crontab files        |
|------------------------------------|--------------------------------------|
| Neither cron.allow nor cron.deny   | Only root                            |
| /etc/cron.allow only               | root and the users listed in         |
|                                    | cron.allow                           |
| /etc/cron.deny only                | All users except those listed in     |
|                                    | cron.deny                            |
| Both cron.allow and cron.deny      | cron.deny is ignored; root and the   |
|                                    | users listed in cron.allow           |
\---------------------------------------------------------------------------/
NOTE!: Denying a user through the above files does not disable that user's already installed crontab.
System Crontab Files
System crontab files use a different format than user crontab files. The default system crontab file is /etc/crontab. The /etc/cron.d/ directory contains additional system crontab files.
Example: [mitesh@Matrix ~]$ cat /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

#run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
02 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
NOTE!: The system crontab files differ from user crontab files in that the sixth field is a username, which is used to execute the command.
run-parts is a shell script (/usr/bin/run-parts). It takes one argument, a directory name, and invokes all of the programs in that directory.
Thus, at 4:02 every morning, all of the executables in the /etc/cron.daily/ directory will be run as the root user.
Default Daily Cron Jobs
The scripts in /etc/cron.daily/ are usually used to:
clean up temporary directories
update the mlocate and whatis databases
perform other housekeeping tasks
A) tmpwatch:
Deletes all files in the /tmp directory that have not been accessed for 240 hours (10 days).
Deletes all files in the /var/tmp directory that have not been accessed for 720 hours (30 days).
B) logrotate:
Keeps log files from growing too large.
Rotates log files at predefined intervals (weekly), or when they reach a predefined size; old files are optionally compressed.
Configuration files:
/etc/logrotate.conf (global configuration)
/etc/logrotate.d/ (overrides the global configuration)
Example: /var/log/messages is rotated weekly to /var/log/messages-yyyymmdd
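For illustration, a per-service override dropped into /etc/logrotate.d/ might look like this (the log name and the values shown are hypothetical, not taken from a real system):

```
/var/log/myapp.log {
    weekly          # rotate on a weekly interval
    maxsize 10M     # ...or sooner, if the file grows past 10 MB
    rotate 4        # keep four rotated copies
    compress        # compress rotated copies
    missingok       # do not complain if the log file is absent
}
```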
The Anacron System
Anacron runs the cron jobs that were missed while the system was powered off. The anacron command is used to run the missed daily, weekly and monthly cron jobs.
Example:
According to the /etc/crontab file, at 4:02 every morning all of the executables in the /etc/cron.daily/ directory are run as the root user. Now suppose your laptop is almost always off at 4:02 AM; the mlocate and whatis databases would then never be updated.
Configuration File: /etc/anacrontab
Field 1: period in days — run the job if it has not been run for this number of days
Field 2: wait for the specified number of minutes before running
Field 3: job identifier
Field 4: the cron job to run
Examples: [mitesh@Matrix ~]$ cat /etc/anacrontab
# /etc/anacrontab: configuration file for anacron
# See anacron(8) and anacrontab(5) for details.
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22

# Period In Days   Delay In Minutes   Job-Identifier   Command
1        5    cron.daily     nice run-parts /etc/cron.daily
7        25   cron.weekly    nice run-parts /etc/cron.weekly
@monthly 45   cron.monthly   nice run-parts /etc/cron.monthly
How Anacron Works
According to the /etc/crontab file, the first command to run in each directory is 0anacron. The 0anacron command records the last-run timestamp in the /var/spool/anacron/cron.{daily,weekly,monthly} files.
On system boot, the anacron command runs. The /etc/anacrontab file specifies how often the commands in cron.daily/, cron.weekly/ and cron.monthly/ should be run. If these commands have not been run within that period, anacron waits for the number of minutes specified in /etc/anacrontab and then runs them.
6. Grouping Commands
Two ways to group commands
Compound
Commands run back to back.
Example: date; who | wc -l
Subshell
Commands inside parentheses are run in their own instance of bash, called a subshell. All output is sent to a single STDOUT and STDERR.
Example: (date; who | wc -l)
Suppose you want to maintain a count of the number of users logged on, along with a time/date stamp, in a log file.
Examples:
[mitesh@Matrix ~]$ date >> logfile
[mitesh@Matrix ~]$ who | wc -l >> logfile

[mitesh@Matrix ~]$ date; who | wc -l
Tue Aug 30 14:04:31 IST 2011
3

[mitesh@Matrix ~]$ date; who | wc -l >> logfile
Tue Aug 30 14:05:08 IST 2011

[mitesh@Matrix ~]$ (date; who | wc -l) >> logfile
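The practical difference between the two groupings shows up with redirection. This short sketch (using /tmp files as stand-ins for a real log) demonstrates it without needing who:

```shell
# Compound: the redirection binds only to the last command, so "first"
# goes to the screen and only "second" lands in the file.
echo first; echo second > /tmp/compound.log

# Subshell: the parentheses run both commands in a child shell whose
# combined output is redirected, so both lines land in the file.
(echo first; echo second) > /tmp/subshell.log

wc -l < /tmp/compound.log   # 1 line
wc -l < /tmp/subshell.log   # 2 lines
```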
7. Exit Status
Processes report success or failure with an exit status:
0 for success
1-255 for failure
$? stores the exit status of the most recent command.
exit [num] terminates the script and sets the exit status to num.
Examples:
[mitesh@Matrix ~]$ ping -c1 -w1 localhost &> /dev/null
[mitesh@Matrix ~]$ echo $?
0

[mitesh@Matrix ~]$ ping -c1 -w1 station999 &> /dev/null
[mitesh@Matrix ~]$ echo $?
2
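The ping examples above need a network; the same behaviour can be seen with any command, for instance the shell built-ins true and false:

```shell
true                          # always succeeds
echo "true -> $?"             # prints: true -> 0

false || echo "false -> $?"   # prints: false -> 1 ($? still holds
                              # false's status on the || branch)

mkdir -p /tmp/es_demo
test -d /tmp/es_demo
echo "test -> $?"             # prints: test -> 0
```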
8. Conditional Execution Operators
Commands can be run conditionally based on exit status:
&& represents conditional AND THEN
|| represents conditional OR ELSE
NOTE!: When executing two commands separated by &&, the second command runs only if the first command exits successfully (exit status 0). When executing two commands separated by ||, the second command runs only if the first command fails (exit status in the range 1 to 255).
Examples:
[mitesh@Matrix ~]$ grep -q 'no_such_user' /etc/passwd || echo "No such user"
No such user

[mitesh@Matrix ~]$ ping -c1 -w2 localhost &> /dev/null \
> && echo "Localhost is up" \
> || (echo "Localhost is unreachable"; exit 1)
Localhost is up
[mitesh@Matrix ~]$ echo $?
0

[mitesh@Matrix ~]$ ping -c1 -w2 station999 &> /dev/null \
> && echo "Station999 is up" \
> || (echo "station999 is unreachable"; exit 1)
station999 is unreachable
[mitesh@Matrix ~]$ echo $?
1
#!/bin/bash
for x in $(seq 1 10)
do
    echo adding test$x
    ( echo -ne "test$x\t"
      useradd test$x 2>&1 > /dev/null && mkpasswd test$x ) >> /tmp/userlog
done
echo 'cat /tmp/userlog to see new passwords'
9. test command
The test command evaluates true or false scenarios to simplify conditional execution.
Returns 0 for true Returns 1 for false
NOTE!: Strings are compared using a mathematical operator (=), while integers are compared using an abbreviation (-eq).
Examples:
# Long form
$ test "$A" = "$B" && echo "Strings are equal"
$ test "$A" -eq "$B" && echo "Integers are equal"

# Shorthand
$ [ "$A" = "$B" ] && echo "Strings are equal"
$ [ "$A" -eq "$B" ] && echo "Integers are equal"
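The distinction matters. As a sketch, compare "10" and "010", which differ as strings but are equal as integers:

```shell
A=10
B=010

# String comparison looks at the characters, so "10" != "010".
[ "$A" = "$B" ]   && echo "strings equal"   || echo "strings differ"

# Integer comparison looks at the numeric value, so 10 == 010.
[ "$A" -eq "$B" ] && echo "integers equal"  || echo "integers differ"
```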
File Tests
Use the following command for the complete list: man test
/---------------------------------------------------------------------------\
| -d FILE | FILE exists and is a directory                                  |
| -e FILE | FILE exists                                                     |
| -f FILE | FILE exists and is a regular file                               |
| -h FILE | FILE exists and is a symbolic link (same as -L)                 |
| -L FILE | FILE exists and is a symbolic link (same as -h)                 |
| -r FILE | FILE exists and read permission is granted                      |
| -s FILE | FILE exists and has a size greater than zero                    |
| -w FILE | FILE exists and write permission is granted                     |
| -x FILE | FILE exists and execute (or search) permission is granted       |
| -O FILE | FILE exists and is owned by the effective user ID               |
| -G FILE | FILE exists and is owned by the effective group ID              |
\---------------------------------------------------------------------------/
Example: $ [ -f ~/lib/functions ] && source ~/lib/functions
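A few of these operators can be exercised on a throwaway path (/tmp/ft_demo here is purely illustrative):

```shell
f=/tmp/ft_demo
rm -f "$f"

[ -e "$f" ] || echo "does not exist yet"
touch "$f"
[ -f "$f" ] && echo "regular file"
[ -s "$f" ] || echo "zero size"
echo data > "$f"
[ -s "$f" ] && echo "now non-empty"
[ -d /tmp ] && echo "/tmp is a directory"
```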
10. Scripting If Statements
Every process reports an exit status: 0 for success, 1-255 for failure. Execute instructions based on the exit status of a command:
#!/bin/bash
if ping -c1 -w2 station1 &> /dev/null
then
    echo "Station1 is up"
elif grep "station1" ~/maintenance.txt &> /dev/null
then
    echo "Station1 is undergoing maintenance"
else
    echo "Station1 is unexpectedly DOWN!"
    exit 1
fi
The exit status can be checked within the body of the if, as shown in the example, or you can save it in a variable immediately after the command runs:
test -x /bin/ping6; STATUS=$?
(Note that command substitution such as STATUS=$(test -x /bin/ping6) captures the command's output, not its exit status.)
The if structure can be combined with the conditional operators:
#!/bin/bash
if test -x /bin/ping6; then
    ping6 -c1 ::1 &> /dev/null && echo "IPv6 stack is up"
elif test -x /bin/ping; then
    ping -c1 127.0.0.1 &> /dev/null && echo "No IPv6, but IPv4 stack is up"
else
    echo "Oops! This should not happen."
    exit 255
fi
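A minimal sketch of the save-the-exit-status pattern, checking /tmp (which exists on any standard Linux system):

```shell
#!/bin/bash
# Run the check, then immediately capture its exit status in a variable.
test -d /tmp
STATUS=$?

if [ "$STATUS" -eq 0 ]; then
    echo "/tmp is present"
else
    echo "/tmp is missing"
    exit 1
fi
```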
11. Installing Applications:
Online installation:
#1. Through Software Manager (Linux Mint) / Software Center (Ubuntu):
First open the terminal and run this command to refresh the package lists:
sudo apt-get update
Then:
1. Open Software Manager/Center; it's in the menu.
2. Search for your desired software in the search box.
3. If it's in the list, it will appear before you. If it's not in the list, follow the instructions in the PPA installation section of this tutorial.
4. Now double-click on the desired software entry and then click "Install".
5. It will be installed on your system, at a speed depending on your network connection.
#2. Through Synaptic Package Manager:
If it is absent from your Linux distribution, you will have to install it through Software Manager/Center first. To me it's the best way to install software in Linux.
1. Open Synaptic Package Manager and click Reload to get the latest package lists.
2. Search for your desired software in the search box.
3. Right-click each package you want to install and mark it for installation; Synaptic will mark additional dependencies on its own. If your software is not in the list, follow the instructions in the PPA installation section of this tutorial.
4. After marking for installation, click Apply.
5. It will download and install the marked packages.
Additional info: if you have a list of packages, save it to a file with a .list extension (this file should contain the exact package names, one per line, with the extra string "install" after each package name, preceded by a space/tab). Then go to File -> Read Markings, browse to the file and open it. Synaptic will mark the packages in the list automatically.
#3. Through the terminal:
If you know the exact name of the software, you can install it through the terminal by simply entering:
sudo apt-get update (to refresh the package lists)
sudo apt-get install software-package-name
That's it.
If it says "unable to locate package...", follow the instructions in the PPA installation section of this tutorial.
#4. PPA installation:
If your software is not in the software list, it may come from a personal package archive (PPA). These are privately developed packages, so use them at your own risk.
Steps:
1. Search Google for the PPA address for your software (like "ppa for package-name").
2. Add it to your repositories by entering this command in the terminal:
sudo add-apt-repository ppa:.....whatever_it_is
3. Then run this command (required):
sudo apt-get update
4. Now your desired software is in the list, so you can follow one of the above processes (#1, #2, #3).
Offline installation:
Say you downloaded your desired software from some website. In this case, if you don't trust the origin of the software, don't install it, or install it at your own risk.
Your downloaded software may come as a .zip, .tar.gz, .tar.bz2, .deb, .rpm, .tgz, .tar.xz or any other type of archive.
If you are on Linux Mint, Ubuntu or another Debian-based OS, try to download .deb packages, because they are easier to install on Debian-based systems.
#5. Installing .deb packages:
Through the terminal:
cd path_to_the_directory_that_contains_the_.deb_file
sudo dpkg -i filename.deb
Through gdebi package manager:
If gdebi is not installed, install it through one of the processes #1, #2, #3 (requires an internet connection).
1. Double-click the .deb file, or open the file with gdebi package manager, and click Install.
2. It will be installed shortly.
#6. Installing .rpm packages:
rpm has to be installed on the system; otherwise follow one of the processes #1, #2, #3 to install rpm (requires an internet connection).
Code:
cd path_to_the_directory_that_contains_the_.rpm_file
sudo rpm -i filename.rpm
#7. Installing from archives (.zip, .tar.gz, etc.):
These archives generally contain the source of the package, and each generally has a different installation procedure. I will discuss a common method which should work for most of them.
General requirements:
1. flex
2. bison or bison++
3. python
As these archives contain source code, your system needs the required tools to compile and build it, so the general requirements stated above may not be sufficient for you. In that case, install the required packages through one of the processes #1, #2, #3 (requires an internet connection). You can learn about your software's dependencies from a README file included in the archive.
Steps:
1. Open the archive with the archive manager by double-clicking it, then extract it.
2. Code:
cd path-to-the-extracted-folder
3. Inside the extracted folder, look carefully:
a. If you find a file named configure, then:
Code:
./configure
make
sudo make install
If the first command fails to execute, run this before the commands above:
chmod +x configure
b. If you find a file named install.sh, then:
Code:
chmod +x install.sh
./install.sh or sudo ./install.sh (if it needs root permission)
Or you can double-click it and select "Run in terminal" or simply "Run".
N.B.: Sometimes a file named something like your_software_name.sh is found instead of install.sh. In that case, replace install.sh with the correct name in the previous commands.
c. If you find a file named install, then:
Code:
chmod +x install
./install or sudo ./install (if it needs root permission)
Or you can double-click it and select "Run in terminal" or simply "Run".
d. If you find a Makefile (and there is no configure file), then:
Code:
make
sudo make install
e. If you still can't find the required files, they may be in a subfolder (generally one named 'bin'). Move to this folder with the cd command and the appropriate path, then look again and follow the same process.
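The decision steps above can be sketched as a single helper function. pick_installer is a name invented for this example; the configure / install.sh / install / Makefile checks mirror the cases described above:

```shell
#!/bin/sh
# Look for the conventional entry points in an extracted source tree
# and run the matching install procedure. The body runs in a subshell
# so the cd does not leak into the caller.
pick_installer() (
    cd "$1" || return 1
    if [ -f configure ]; then
        chmod +x configure && ./configure && make && sudo make install
    elif [ -f install.sh ]; then
        chmod +x install.sh && ./install.sh
    elif [ -f install ]; then
        chmod +x install && ./install
    elif [ -f Makefile ]; then
        make && sudo make install
    else
        echo "no installer found; check a bin/ subfolder or the README" >&2
        return 1
    fi
)
```

Usage: pick_installer path-to-the-extracted-folder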
#8. Pre-installed archives:
Some packages are archived as pre-installed (portable) packages, i.e. you don't have to install them; you just need to extract them in a safe place. There is an executable file (its name is generally the same as the software's) in the extracted folder or in child folders like bin, build, etc. You have to find it and make it executable.
Example: Eclipse, ADT bundle (Android development tools)
Code to make it executable:
chmod +x filename_with_exact_path
Then you can run it with:
Code:
filename
Or by double-clicking it and selecting "Run in terminal" or "Run", whichever your software supports.
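The extract-then-mark-executable flow can be sketched with a stand-in "application" (myapp and the /tmp path are invented for this demo; a real download would be extracted instead):

```shell
# Pretend /tmp/app is the folder we extracted, with the program in bin/.
mkdir -p /tmp/app/bin
printf '#!/bin/sh\necho myapp running\n' > /tmp/app/bin/myapp

chmod +x /tmp/app/bin/myapp   # make it executable
/tmp/app/bin/myapp            # run it: prints "myapp running"
```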
#9. Installing .sh files:
Some software comes with a .sh file to install it:
chmod +x filename.sh
./filename.sh or sudo ./filename.sh (if it needs root permission)
Or double-click it and select "Run in terminal" or "Run", whichever your software supports.
#10. Installing .run files:
Some software comes with a .run file to install it:
chmod +x filename.run
./filename.run or sudo ./filename.run (if it needs root permission)
Or double-click it and select "Run in terminal" or "Run", whichever your software supports.
Additional info about offline installation:
Software often has a lot of dependencies, and downloading all of them yourself can be very difficult and tiring; tools that fetch a package together with its dependencies will save you time and effort.
UNIT-III OPEN SOURCE WEB SERVERS
INSTALLATION, CONFIGURATION AND ADMINISTRATION OF APACHE
Apache is the web server software most web hosts run. Unless you are creating ASP.NET applications on Microsoft IIS, your host is likely to use Apache: the most widespread and fully featured web server available. It is an open-source project, so it costs nothing to download or install.
The following instructions describe how to install Apache on Windows. Mac OSX comes
with Apache and PHP, although you might need to enable them. Most Linux users will have
Apache pre-installed or available in the base repositories.
The Apache Installation Wizard
An excellent official .msi installation wizard is available from the Apache download page.
This option is certainly recommended for novice users or perhaps those installing Apache
for the first time.
Manual Installation
Manual installation offers several benefits:
backing up, reinstalling, or moving the web server can be achieved in seconds (see 8
Tips for Surviving PC Failure)
you have more control over how and when Apache starts
you can install Apache anywhere, such as a portable USB drive (useful for
client demonstrations).
Step 1: configure IIS, Skype and other software (optional)
If you have a Professional or Server version of Windows, you may already have IIS
installed. If you would prefer Apache, either remove IIS as a Windows component or disable
its services.
Apache listens for requests on TCP/IP port 80. The default installation of Skype also listens on
this port and will cause conflicts. To switch it off, start Skype and choose Tools > Options > Advanced > Connection. Ensure you untick “Use port 80 and 443 as alternatives for
incoming connections”.
Step 2: download the files
We are going to use the unofficial Windows binary from Apache Lounge. This version has performance and stability improvements over the official Apache distribution, although I have yet to notice a significant difference. It is provided as a manually installable ZIP file from www.apachelounge.com/download/
You should also download and install the Windows C++ runtime from Microsoft.com. You
may have this installed already, but there is no harm installing it again.
As always, remember to virus scan all downloads.
Step 3: extract the files
We will install Apache in C:\Apache2, so extract the ZIP file to the root of the C: drive. Apache can be installed anywhere on your system, but you will need to change the configuration file paths accordingly.
Step 4: configure Apache
Apache is configured with the text file conf\httpd.conf contained in the Apache folder. Open it with your favourite text editor.
Note that all file path settings use a ‘/’ forward-slash rather than the Windows backslash.
If you installed Apache anywhere other than C:\Apache2, now is a good time to search and replace all references to "c:/Apache2".
There are several lines you should change for your production environment:
Line 46, listen to all requests on port 80:
Listen *:80
Line 116, enable mod-rewrite by removing the # (optional, but useful):
LoadModule rewrite_module modules/mod_rewrite.so
Line 172, specify the server domain name:
ServerName localhost:80
Line 224, allow .htaccess overrides:
AllowOverride All
Step 5: change the web page root (optional)
By default, Apache returns files found in its htdocs folder. I would recommend using a folder on another drive or partition to make backups and re-installation easier. For the purposes of this example, we will create a folder called D:\WebPages and change httpd.conf accordingly:
Line 179, set the root:
DocumentRoot "D:/WebPages"
and line 204:
<Directory "D:/WebPages">
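Collected in one place, the httpd.conf directives edited in this walkthrough look roughly like this (the exact line numbers vary between Apache versions, and D:/WebPages is just this example's document root):

```apache
Listen *:80
LoadModule rewrite_module modules/mod_rewrite.so
ServerName localhost:80

DocumentRoot "D:/WebPages"
<Directory "D:/WebPages">
    AllowOverride All
</Directory>
```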
Step 6: test your installation
Your Apache configuration can now be tested. Open a command box (Start > Run > cmd) and enter (adjusting the path if you installed elsewhere):
cd \Apache2\bin
httpd -t
Correct any httpd.conf configuration errors and retest until none appear.
Step 7: install Apache as a Windows service
The easiest way to start Apache is to add it as a Windows service. From a command prompt, enter:
httpd -k install
Open the Control Panel, Administrative Tools, then Services and double-click Apache2.2. Set the Startup type to "Automatic" to ensure Apache starts every time you boot your PC. Alternatively, set the Startup type to "Manual" and launch Apache whenever you choose using the command "net start Apache2.2".
Step 8: test the web server
Create a file named index.html in Apache's web page root (either htdocs or D:\WebPages) and add a little HTML code, e.g.:
<html><body><h1>It works!</h1></body></html>
Ensure Apache has started successfully, open a web browser and enter the
address http://localhost/. If all goes well, your test page should appear.
Nginx
Nginx is a web server which can also be used as a reverse proxy, load balancer, mail proxy and HTTP cache. The software was created by Igor Sysoev and first publicly released in 2004. A company of the same name was founded in 2011 to provide support and the paid Nginx Plus software.
Nginx is free and open-source software, released under the terms of a BSD-like license. A large fraction of web servers use Nginx, often as a load balancer.
FEATURES:
Nginx can be deployed to serve dynamic HTTP content on the network using FastCGI or SCGI handlers for scripts, WSGI application servers or Phusion Passenger modules, and it can serve as a software load balancer.
Nginx uses an asynchronous event-driven approach, rather than threads, to handle requests. Its modular event-driven architecture can provide more predictable performance under high loads.
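As a minimal sketch of the load-balancer role (the upstream name and backend ports here are hypothetical), an nginx.conf could proxy requests across two backends like this:

```nginx
events {}

http {
    # Two hypothetical application servers to balance across.
    upstream app_servers {
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;   # round-robin by default
        }
    }
}
```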
Nginx vs. Nginx Plus
There are two versions of Nginx: open-source (OSS) Nginx and Nginx Plus. Nginx Plus offers additional features not included in OSS Nginx, such as active health checks, session persistence based on cookies, DNS service discovery integration, a cache-purging API, AppDynamics, Datadog, Dynatrace and New Relic plug-ins, active-active HA with configuration sync, a key-value store, on-the-fly zero-downtime updates of upstream configurations and key-value stores through the Nginx Plus API, and a web application firewall (WAF) dynamic module.
Performance vs. Apache
Nginx was written with an explicit goal of outperforming the Apache web server.[38] Out of the box, serving static files, Nginx uses dramatically less memory than Apache and can handle roughly four times more requests per second. On the other hand, it is known to be less stable on Windows than Apache, which has full Windows support.[39][40] This performance boost comes at the cost of decreased flexibility, such as the ability to override system-wide access settings on a per-file basis (Apache accomplishes this with an .htaccess file, while Nginx has no such feature built in).[41][42]
Formerly, adding third-party modules to Nginx required recompiling the application from source with the modules statically linked. This was partially overcome in version 1.9.11, released in February 2016, with the addition of dynamic module loading.[43] However, modules must still be compiled at the same time as Nginx, and not all modules are compatible with this system; some require the older static linking process.
OPEN SOURCE TOOLS, IDE AND RDBMS:
Eclipse IDE
Eclipse is an integrated development environment (IDE) for Java and other programming languages such as C, C++, PHP and Ruby. The development environment provided by Eclipse includes the Eclipse Java Development Tools (JDT) for Java, Eclipse CDT for C/C++, and Eclipse PDT for PHP, among others.
This tutorial will teach you how to use Eclipse in your day-to-day work while developing software projects, with special emphasis on Java projects.
Licensing
The Eclipse platform and other plug-ins from the Eclipse Foundation are released under the Eclipse Public License (EPL). The EPL ensures that Eclipse is free to download and install. It also allows Eclipse to be modified and distributed.
Installing Eclipse
To install on Windows, you need a tool that can extract the contents of a zip file, for example:
7-zip
PeaZip
IZArc
Using any one of these tools, extract the contents of the eclipse zip file to any folder of your
choice.
Launching Eclipse
On the Windows platform, if you extracted the contents of the zip file to c:\, then you can start Eclipse using c:\eclipse\eclipse.exe
When eclipse starts up for the first time it prompts you for the location of the workspace
folder. All your data will be stored in the workspace folder. You can accept the default or
choose a new location.
If you select "Use this as the default and do not ask again", this dialog box will not come
up again. You can change this preference using the Workspaces Preference Page.
OpenStack cloud technology
OpenStack is a relative newcomer to the IaaS space, its first release having come in late 2010. Despite having been around for less than two years, OpenStack is now one of the most widely used cloud stacks. Rather than
being a single solution, however, OpenStack is a growing suite of open source solutions
(including core and newly incubated projects) that together form a powerful and mature IaaS
stack. As shown in Figure 2, OpenStack is built from a core of technologies (more than what is
shown here, but these represent the key aspects). On the left side is the Horizon dashboard,
which exposes a user interface for managing OpenStack services for both users and
administrators. Nova provides a scalable compute platform, supporting the provisioning and
management of large numbers of servers and virtual machines (VMs; in a hypervisor-
agnostic manner). Swift implements a massively scalable object storage system with
internal redundancy. At the bottom are Quantum and Melange, which implement network
connectivity as a service. Finally, the Glance project implements a repository for virtual disk
images (image as a service).
Figure 2. Core and additional components of an OpenStack solution
OpenStack is a collection of projects that as a whole provide a complete IaaS solution.
Table 1 illustrates these projects with their contributing aspects.
OpenStack architecture
OpenStack is represented by three core open source projects (as shown in Figure 2): Nova
(compute), Swift (object storage), and Glance (VM repository). Nova, or OpenStack
Compute, provides the management of VM instances across a network of servers. Its
application programming interfaces (APIs) provide compute orchestration for an approach
that attempts to be agnostic not only of physical hardware but also of hypervisors. Note that
Nova provides not only an OpenStack API for management but an Amazon EC2-compatible
API for those comfortable with that interface. Nova supports proprietary hypervisors for
organizations that use them, but more importantly, it supports hypervisors like Xen and Kernel
Virtual Machine (KVM) as well as operating system virtualization such as Linux® Containers.
For development purposes, you can also use emulation solutions like QEMU.
VERSION CONTROL SYSTEMS
Version control is a system that records changes to a file or set of files over time so that you
can recall specific versions later. For the examples in this book, you will use software
source code as the files being version controlled, though in reality you can do this with
nearly any type of file on a computer. Version control systems (VCS) most commonly run as stand-alone applications, but revision control is also embedded in various types of software such as word processors, spreadsheets, collaborative web docs[2] and various content management systems, e.g., Wikipedia's page history. Revision control allows for the ability to revert a document to a previous revision, which is critical for allowing editors to track each other's edits, correct mistakes, and defend against vandalism and spamming in wikis.
Many people's version-control method of choice is to copy files into another directory (perhaps a time-stamped directory, if they're clever). This approach is very common because it is so simple, but it is also incredibly error prone. It is easy to forget which directory you're in and accidentally write to the wrong file or copy over files you don't mean to.
To deal with this issue, programmers long ago developed local VCSs that had a simple
database that kept all the changes to files under revision control (see Figure 1-1).
Figure 1-1. Local version control diagram.
One of the more popular VCS tools was a system called rcs, which is still distributed with
many computers today. Even the popular Mac OS X operating system includes the rcs
command when you install the Developer Tools. This tool basically works by keeping patch
sets (that is, the differences between files) from one revision to another in a special format
on disk; it can then recreate what any file looked like at any point in time by adding up all
the patches.
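The patch-set idea — store only the differences, then rebuild any revision by applying them in order — can be sketched with diff and patch (the file names are invented for this demo):

```shell
printf 'one\n' > /tmp/v1            # revision 1
printf 'one\ntwo\n' > /tmp/v2       # revision 2

# Store only the difference between the two revisions.
diff -u /tmp/v1 /tmp/v2 > /tmp/delta || true   # diff exits 1 when files differ

# Recreate revision 2 from revision 1 plus the stored patch.
cp /tmp/v1 /tmp/rebuilt
patch -s /tmp/rebuilt /tmp/delta

cmp -s /tmp/rebuilt /tmp/v2 && echo "revision 2 rebuilt exactly"
```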
Centralized Version Control Systems
The next major issue that people encounter is that they need to collaborate with developers
on other systems. To deal with this problem, Centralized Version Control Systems (CVCSs)
were developed. These systems, such as CVS, Subversion, and Perforce, have a single server
that contains all the versioned files, and a number of clients that check out files from that
central place. For many years, this has been the standard for version control (see Figure 1-2).
Figure 1-2. Centralized version control diagram.
This setup offers many advantages, especially over local VCSs. For example, everyone
knows to a certain degree what everyone else on the project is doing. Administrators have
fine-grained control over who can do what; and it’s far easier to administer a CVCS than it
is to deal with local databases on every client.
Distributed Version Control Systems
This is where Distributed Version Control Systems (DVCSs) step in. In a DVCS (such as
Git, Mercurial, Bazaar or Darcs), clients don’t just check out the latest snapshot of the files:
they fully mirror the repository. Thus if any server dies, and these systems were collaborating
via it, any of the client repositories can be copied back up to the server to restore it. Every
checkout is really a full backup of all the data (see Figure 1-3).
GIT
As with many great things in life, Git began with a bit of creative destruction and fiery controversy. The Linux kernel is an open source software project of fairly large scope. For most of the lifetime of the Linux kernel maintenance (1991-2002), changes to the software were passed around as patches and archived files. In 2002, the Linux kernel project began using a proprietary DVCS called BitKeeper. In 2005, the relationship between the kernel community and the company that owned BitKeeper broke down, and the Linux development community (in particular Linus Torvalds) developed their own tool based on the lessons learned while using BitKeeper. Some of the goals of the new system were as follows:
Speed
Simple design
Strong support for non-linear development (thousands of parallel branches)
Fully distributed
Able to handle large projects like the Linux kernel efficiently (speed and data size)
Since its birth in 2005, Git has evolved and matured to be easy to use and yet retain these initial qualities. It's incredibly fast, it's very efficient with large projects, and it has an incredible branching system for non-linear development.
The mechanism that Git uses for this checksumming is called a SHA-1 hash. This is a 40-character string composed of hexadecimal characters (0-9 and a-f), calculated based on the contents of a file or directory structure in Git.
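The scheme can be reproduced by hand: a blob's SHA-1 is the hash of the header "blob <size>\0" followed by the file's contents — the same value `git hash-object` would print for the file (the file name below is just for the demo):

```shell
printf 'hello\n' > /tmp/blobdemo
size=$(wc -c < /tmp/blobdemo)

# Hash "blob <size>\0<contents>", exactly as Git does for a blob object.
{ printf 'blob %d\0' "$size"; cat /tmp/blobdemo; } | sha1sum
# -> ce013625030ba8dba906f756967f9e9ca394464a
```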
Git Generally Only Adds Data
When you do actions in Git, nearly all of them only add data to the Git database. It is very difficult to get
the system to do anything that is not undoable or to make it erase data in any way. As in any VCS, you
can lose or mess up changes you haven’t committed yet; but after you commit a snapshot into Git, it is
very difficult to lose, especially if you regularly push your database to another repository.
Concurrent Versions System (CVS)
The Concurrent Versions System (CVS), also known as the Concurrent Versioning System,
is a free client–server revision control system in the field of software development. A version control
system keeps track of all work and all changes in a set of files, and allows several developers
(potentially widely separated in space and time) to collaborate.
FEATURES
CVS uses a client–server architecture: a server stores the current version(s) of a project and its
history, and clients connect to the server in order to "check out" a complete copy of the project, work on
this copy and then later "check in" their changes. Typically, the client and server connect over a LAN or
over the Internet, but client and server may both run on the same machine if CVS has the task of keeping
track of the version history of a project with only local developers. The server software normally
runs on Unix (although at least the CVSNT server also supports various flavours of Microsoft
Windows), while CVS clients may run on any major operating-system platform.
CVS was designed around the following assumptions:
Symbolic links are excluded, because when they are stored in a version control system they can pose a
security risk. For instance, a symbolic link to a sensitive file could be stored in the repository, making the
sensitive file accessible even when it is not checked in. In place of symbolic links, scripts that require
certain privileges and conscious intervention to execute may be checked into CVS.
Text files are expected to be the primary file type stored in the repository. However,
binary files are also supported, and files with a particular file extension can automatically be
recognized as binary.
Changes are committed frequently to the centrally checked-in copies of files, in order
to aid merging and foster rapid distribution of changes to all users; as a result, there is no support
for distributed revision control or unpublished changes.
MySQL basics
MySQL is the most popular open-source relational database management system (RDBMS).
MySQL is one of the best RDBMSs for developing various web-based software
applications. MySQL is developed, marketed and supported by MySQL AB, which is a Swedish
company. This tutorial will give you a quick start to MySQL and make you comfortable with
MySQL programming.
Installation and usage
Installation Requirements
MySQL Installer requires Microsoft .NET Framework 4.5.2 or later. If this version is not installed on the
host computer, you can download it by visiting the Microsoft website.
MySQL Installer Community Edition
Download this edition from https://dev.mysql.com/downloads/installer/ to install the Community Edition of
all MySQL products for Windows. Select one of the following MySQL Installer package options:
Web: Contains MySQL Installer and configuration files only. The web package downloads only the
MySQL products you select to install, but it requires an internet connection for each download. The
size of this file is approximately 2 MB; the name of the file has the form
mysql-installer-community-web-VERSION.N.msi, where VERSION is the MySQL Server version number
(such as 8.0) and N is the package number, which begins at 0.
Full: Bundles all of the MySQL products for Windows (including the MySQL server). The file size
is over 300 MB, and its name has the form mysql-installer-community-VERSION.N.msi, where VERSION
is the MySQL Server version number (such as 8.0) and N is the package number, which begins at 0.
MySQL Installer Commercial Edition
Download this edition from https://edelivery.oracle.com/ to install the Commercial (Standard or
Enterprise) Edition of MySQL products for Windows. The Commercial Edition includes all of the
current and previous GA versions in the Community Edition (excludes development-milestone versions)
and also includes the following products:
MySQL Workbench SE/EE
MySQL Enterprise Backup
MySQL Enterprise Firewall
This edition integrates with your My Oracle Support (MOS) account. For knowledge-base content
and patches, see My Oracle Support.
PostgreSQL
PostgreSQL, often simply Postgres, is an object-relational database management system (ORDBMS)
with an emphasis on extensibility and standards compliance. It can handle workloads ranging from
small single-machine applications to large Internet-facing applications (or data warehousing) with many
concurrent users. On macOS Server, PostgreSQL is the default database;[9][10][11] it is also available
for Microsoft Windows and Linux (supplied in most distributions).
PostgreSQL is ACID-compliant and transactional. It has updatable views and materialized
views, triggers, and foreign keys, and it supports functions, stored procedures, and other forms of extensibility.[12]
PostgreSQL is developed by the PostgreSQL Global Development Group, a diverse group of many
companies and individual contributors.[13] It is free and open-source, released under the terms of the
PostgreSQL License, a permissive software license.
MULTIVERSION CONCURRENCY CONTROL
PostgreSQL manages concurrency through a system known as multiversion concurrency control (MVCC),
which gives each transaction a "snapshot" of the database, allowing changes to be made without being
visible to other transactions until the changes are committed. This largely eliminates the need for read
locks and ensures the database maintains the ACID (atomicity, consistency, isolation, durability)
principles in an efficient manner. PostgreSQL offers three levels of transaction isolation: Read Committed,
Repeatable Read, and Serializable. Because PostgreSQL is immune to dirty reads, requesting the Read
Uncommitted transaction isolation level provides Read Committed instead. PostgreSQL supports full
serializability via the serializable snapshot isolation (SSI) technique.
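As a sketch of how an isolation level is requested in practice (the accounts table and its columns here are hypothetical), a session can ask for Serializable isolation at the start of a transaction:

```sql
-- Hypothetical table; illustrates requesting an isolation level in PostgreSQL.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE id = 1;                 -- reads from the snapshot
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;  -- under SSI this may fail with a serialization error; the client retries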
MongoDB
MongoDB is an open source database management system (DBMS) that uses a
document-oriented database model which supports various forms of data. It is one of
numerous nonrelational database technologies which arose in the mid-2000s under the
NoSQL banner for use in big data applications and other processing jobs involving
data that doesn't fit well in a rigid relational model. Instead of using tables and rows
as in relational databases, the MongoDB architecture is made up of collections and
documents.
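For instance, a single document in a collection can hold what a relational design would split across several rows and tables. The field names below are invented for illustration:

```json
{
  "_id": "a1",
  "name": "Asha",
  "email": "[email protected]",
  "orders": [
    { "item": "book", "qty": 2 },
    { "item": "pen",  "qty": 5 }
  ]
}
```

The embedded orders array lives inside the customer document, so reading a customer and their orders needs no join.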
MongoDB pros and cons
Like other NoSQL databases, MongoDB doesn't require predefined schemas and it
stores any type of data. This gives users the flexibility to create any number of
fields in a document, making it easier to scale MongoDB databases compared to
relational databases.
One of the advantages of using documents is that these objects map to native data
types in a number of programming languages. Also, having embedded
documents reduces the need for database joins, which can reduce costs.
On the downside, default settings can leave MongoDB databases exposed if they haven't been configured by a database administrator.
MongoDB platforms
MongoDB is available in community and commercial versions through vendor
MongoDB Inc. MongoDB Community Edition is the open source release, while
MongoDB Enterprise Server brings added security features, an in-memory storage
engine, administration and authentication features, and monitoring capabilities
through Ops Manager.
A graphical user interface (GUI) called MongoDB Compass gives users a way to
work with document structure, conduct queries, index data and more. The
MongoDB Connector for BI allows users to connect the NoSQL database to their
business intelligence tools to visualize data and create reports using SQL queries.
Hadoop
Hadoop is an Apache open source framework written in Java that allows distributed processing
of large datasets across clusters of computers using simple programming models. A Hadoop
application works in an environment that provides distributed storage and computation across
clusters of computers. Hadoop is designed to scale up from a single server to thousands of machines,
each offering local computation and storage.
Hadoop Architecture
At its core, Hadoop has two major layers, namely −
Processing/Computation layer (MapReduce), and
Storage layer (Hadoop Distributed File System).
Hadoop Distributed File System
The Hadoop Distributed File System (HDFS) is based on the Google File System (GFS) and
provides a distributed file system that is designed to run on commodity hardware. It has
many similarities with existing distributed file systems. However, the differences from other
distributed file systems are significant. It is highly fault-tolerant and is designed to be
deployed on low-cost hardware. It provides high throughput access to application data and is
suitable for applications having large datasets.
Apart from the above-mentioned two core components, Hadoop framework also includes the
following two modules −
Hadoop Common − These are Java libraries and utilities required by other Hadoop modules.
Hadoop YARN − This is a framework for job scheduling and cluster resource management.
Unit IV
Introduction to MySQL
MySQL is a fast, easy-to-use RDBMS being used for many small and big businesses.
MySQL is developed, marketed and supported by MySQL AB, which is a Swedish
company. MySQL is becoming so popular because of many good reasons −
MySQL is released under an open-source license. So you have nothing to pay to use it.
MySQL is a very powerful program in its own right. It handles a large subset of the functionality
of the most expensive and powerful database packages.
MySQL uses a standard form of the well-known SQL data language.
MySQL works on many operating systems and with many languages including PHP, PERL, C,
C++, JAVA, etc.
MySQL works very quickly and works well even with large data sets.
MySQL is very friendly to PHP, the most appreciated language for web development.
MySQL supports large databases, up to 50 million rows or more in a table. The default file size
limit for a table is 4GB, but you can increase this (if your operating system can handle it) to a
theoretical limit of 8 million terabytes (TB).
MySQL is customizable. The open-source GPL license allows programmers to modify the
MySQL software to fit their own specific environments.
Create Database and Tables
1. CREATE DATABASE – creates the database. To use this statement, you need the CREATE privilege
for the database.
2. CREATE TABLE – creates the table. You must have the CREATE privilege for the table.
3. INSERT – adds/inserts data into a table, i.e. inserts new rows into an existing table.
Log in as the MySQL root user, then create a database and a table:
$ mysql -u root -p
mysql> CREATE DATABASE books;
mysql> USE books;
mysql> CREATE TABLE authors (id INT, name VARCHAR(20), email VARCHAR(20));
mysql> SHOW TABLES;
Finally, add data (a row) to the authors table using the INSERT statement:
mysql> INSERT INTO authors (id,name,email) VALUES(1,"Vivek","[email protected]");
To display all rows (the stored data):
mysql> SELECT * FROM authors;
MySQL: Joins
This MySQL tutorial explains how to use MySQL JOINS (inner and outer) with
syntax, visual illustrations, and examples.
Description
MySQL JOINS are used to retrieve data from multiple tables. A MySQL JOIN is
performed whenever two or more tables are joined in a SQL statement.
There are different types of MySQL joins:
MySQL INNER JOIN (or sometimes called simple join)
MySQL LEFT OUTER JOIN (or sometimes called LEFT JOIN)
MySQL RIGHT OUTER JOIN (or sometimes called RIGHT JOIN)
So let's discuss MySQL JOIN syntax, look at visual illustrations of MySQL JOINS,
and explore MySQL JOIN examples.
INNER JOIN (simple join)
Chances are, you've already written a statement that uses a MySQL INNER JOIN. It is
the most common type of join. MySQL INNER JOINS return all rows from multiple
tables where the join condition is met.
Syntax
The syntax for the INNER JOIN in MySQL is:
SELECT columns
FROM table1
INNER JOIN table2
ON table1.column = table2.column;
LEFT OUTER JOIN
Another type of join is called a MySQL LEFT OUTER JOIN. This type of join returns all
rows from the LEFT-hand table specified in the ON condition and only those rows from
the other table where the joined fields are equal (join condition is met).
Syntax
The syntax for the LEFT OUTER JOIN in MySQL is:
SELECT columns
FROM table1
LEFT [OUTER] JOIN table2
ON table1.column = table2.column;
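To make the two joins concrete (the customers and orders tables below are hypothetical): an INNER JOIN returns only customers that have orders, while a LEFT OUTER JOIN also returns customers with no orders, with NULL in the order columns.

```sql
-- Hypothetical tables: customers(customer_id, name), orders(order_id, customer_id)
SELECT c.name, o.order_id
FROM customers c
INNER JOIN orders o
ON c.customer_id = o.customer_id;   -- only customers with at least one order

SELECT c.name, o.order_id
FROM customers c
LEFT OUTER JOIN orders o
ON c.customer_id = o.customer_id;   -- all customers; order_id is NULL if none
```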
Loading and Dumping a Database
We can load a database or otherwise execute SQL commands from a file. We simply put the commands or
database into a file—let's call it mystuff.sql—and load it in with this command:
$ mysql people < mystuff.sql
We can also dump out a database into a file with this command:
$ mysqldump people > entiredb.sql
UNIT-V
Server script
General Syntactic Characteristics
Five important characteristics make PHP's practical nature possible −
Simplicity
Efficiency
Security
Flexibility
Familiarity
Common uses of PHP
PHP performs system functions: it can create, open, read, write, and close files on a system.
PHP can handle forms: it can gather data from forms, save data to a file, send data through email,
and return data to the user.
You can add, delete, and modify elements within your database through PHP.
PHP can access cookie variables and set cookies.
Using PHP, you can restrict users from accessing some pages of your website.
It can encrypt data.
PHP SCRIPTING
PHP tutorial for beginners and professionals provides in-depth knowledge of PHP scripting language. Our PHP
tutorial will help you to learn PHP scripting language easily.
This PHP tutorial covers all the topics of PHP such as introduction, control statements, functions, array, string,
file handling, form handling, regular expression, date and time, object-oriented programming in PHP, math,
PHP MySQL, PHP with Ajax, PHP with jQuery and PHP with XML.
o PHP stands for Hypertext Preprocessor.
o PHP is an interpreted language, i.e., there is no need for compilation.
o PHP is a server-side scripting language.
o PHP is faster than other scripting languages, for example, ASP and JSP.
PHP Example
In this tutorial, you will get a lot of PHP examples to understand the topic well. You must save the PHP file
with a .php extension. Let's see a simple PHP example.
File: hello.php
<!DOCTYPE html>
<html>
<body>
<?php
echo "<h2>Hello by PHP</h2>";
?>
</body>
</html>
Web Development
PHP is widely used in web development nowadays. PHP can develop dynamic websites easily. But you
must have basic knowledge of the following technologies for web development as well.
o HTML
o CSS
o JavaScript
o Ajax
o XML and JSON
o jQuery
PHP Variables
A variable in PHP is the name of a memory location that holds data. A variable is temporary storage:
it holds data only while the script runs.
In PHP, a variable is declared using the $ sign followed by the variable name.
The syntax for declaring a variable in PHP is given below:
$variablename = value;
SAMPLE PROGRAM
<?php
$str="hello string";
$x=200;
$y=44.6;
echo "string is: $str <br/>";
echo "integer is: $x <br/>";
echo "float is: $y <br/>";
?>
PHP Operators
A PHP operator is a symbol used to perform operations on operands. For example:
$num = 10 + 20; // + is the operator; 10 and 20 are operands
In the above example, + is the binary + operator, 10 and 20 are the operands, and $num is a variable.
PHP Operators can be categorized in following forms:
o Arithmetic Operators
o Comparison Operators
o Bitwise Operators
o Logical Operators
o String Operators
o Incrementing/Decrementing Operators
o Array Operators
o Type Operators
o Execution Operators
o Error Control Operators
o Assignment Operators
We can also categorize operators by the number of operands they take. On this basis they fall into 3 forms:
o Unary Operators: works on single operands such as ++, -- etc.
o Binary Operators: works on two operands such as binary +, -, *, / etc.
o Ternary Operators: works on three operands such as "?:".
PHP Arrays
PHP array is an ordered map (it contains values on the basis of keys). It is used to hold multiple values of a
similar type in a single variable.
Advantage of PHP Array
o Less Code: We don't need to define multiple variables.
o Easy to traverse: With the help of a single loop, we can traverse all the elements of an array.
o Sorting: We can sort the elements of an array.
There are 3 types of array in PHP.
1. Indexed Array
2. Associative Array
3. Multidimensional Array
A PHP index is represented by a number, which starts from 0. We can store numbers, strings and objects in
a PHP array. All PHP array elements are assigned an index number by default.
PHP Associative Array
We can associate a name (key) with each array element in PHP using the => symbol.
PHP Array Functions
PHP provides various array functions to access and manipulate the elements of an array. The important
PHP array functions are given below.
PHP array() function
PHP array() function creates and returns an array. It allows you to create indexed, associative
and multidimensional arrays.
Syntax
array array ([ mixed $... ] )
Example
<?php
$season=array("summer","winter","spring","autumn");
echo "Seasons are: $season[0], $season[1], $season[2] and $season[3]";
?>
PHP CONTROL STATEMENTS
If Else
The PHP if-else statement is used to test a condition. There are various ways to use an if statement in PHP:
o if
o if-else
o if-else-if
o nested if
PHP If Statement
The body of a PHP if statement is executed if the condition is true.
Syntax
if(condition){
//code to be executed
}
PHP Switch
The PHP switch statement is used to execute one statement from multiple conditions. It works like the
PHP if-else-if statement.
Syntax
switch(expression){
case value1:
//code to be executed
break;
case value2:
//code to be executed
break;
......
default:
//code to be executed if no case is matched
}
PHP For Loop
The PHP for loop can be used to execute a block of code a specified number of times.
It should be used when the number of iterations is known; otherwise, use a while loop.
Syntax
for(initialization; condition; increment/decrement){
//code to be executed
}
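A short sketch of the loop in use (the variable names are invented): since we know we want exactly five iterations, a for loop fits, summing the numbers 1 to 5.

```php
<?php
// Sketch: a counted loop, used because the number of iterations (5) is known.
$sum = 0;
for ($i = 1; $i <= 5; $i++) {
    $sum = $sum + $i;
}
echo "sum is: $sum";  // prints: sum is: 15
?>
```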
Database Access with PHP
PHP 5 and later can work with a MySQL database using:
MySQLi extension (the "i" stands for improved)
PDO (PHP Data Objects)
Earlier versions of PHP used the MySQL extension. However, this extension was
deprecated in 2012.
<?php
$servername = "localhost";
$username = "username";
$password = "password";

// Create connection
$conn = new mysqli($servername, $username, $password);

// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
echo "Connected successfully";
?>
Select Data From a MySQL Database
The SELECT statement is used to select data from one or more tables:
SELECT column_name(s) FROM table_name
or we can use the * character to select ALL columns from a table:
SELECT * FROM table_name
Select Data With MySQLi
The following example selects the id, firstname and lastname columns from the MyGuests
table and displays them on the page:
Example (MySQLi Object-oriented)
<?php
$servername = "localhost";
$username = "username";
$password = "password";
$dbname = "myDB";
// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);

// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
$sql = "SELECT id, firstname, lastname FROM MyGuests";
$result = $conn->query($sql);
if ($result->num_rows > 0) {
// output data of each row
while($row = $result->fetch_assoc()) {
echo "id: " . $row["id"]. " - Name: " . $row["firstname"]. " " . $row["lastname"]. "<br>";
}
} else {
echo "0 results";
}
$conn->close();
?>
Code lines to explain from the example above:
First, we set up an SQL query that selects the id, firstname and lastname columns from the
MyGuests table. The next line of code runs the query and puts the resulting data into a
variable called $result.
Then, num_rows checks if there are more than zero rows returned.
If there are more than zero rows returned, the function fetch_assoc() puts all the results into an
associative array that we can loop through. The while() loop loops through the result set and
outputs the data from the id, firstname and lastname columns.
The following example shows the same as the example above, in the MySQLi procedural way:
Example (MySQLi Procedural)
<?php
$servername = "localhost";
$username = "username";
$password = "password";
$dbname = "myDB";
// Create connection
$conn = mysqli_connect($servername, $username, $password, $dbname);
// Check connection
if (!$conn) {
    die("Connection failed: " . mysqli_connect_error());
}
$sql = "SELECT id, firstname, lastname FROM MyGuests";
$result = mysqli_query($conn, $sql);
if (mysqli_num_rows($result) > 0) {
// output data of each row
while($row = mysqli_fetch_assoc($result)) {
echo "id: " . $row["id"]. " - Name: " . $row["firstname"]. " " . $row["lastname"]. "<br>";
}
} else {
echo "0 results";
}
mysqli_close($conn);
?>
MySQL UPDATE statement
We use the UPDATE statement to update existing data in a table. We can use
the UPDATE statement to change column values of a single row, a group of rows, or all rows in
a table. The following illustrates the syntax of the MySQL UPDATE statement:
UPDATE [LOW_PRIORITY] [IGNORE] table_name
SET
    column_name1 = expr1,
    column_name2 = expr2,
    ...
WHERE
    condition;
In the UPDATE statement:
First, specify the name of the table in which you want to update data after the UPDATE keyword.
Second, the SET clause specifies which columns you want to modify and the new
values. To update multiple columns, you use a list of comma-separated assignments. You
supply the value in each column's assignment in the form of a literal value, an expression,
or a subquery.
Third, specify which rows are to be updated using a condition in the WHERE clause.
The WHERE clause is optional; if you omit it, the UPDATE statement
will update all rows in the table.
Notice that the WHERE clause is so important that you should not forget it. Sometimes you
may want to change just one row, but if you forget the WHERE clause you will
accidentally update all the rows in the table.
MySQL supports two modifiers in the UPDATE statement.
1. The LOW_PRIORITY modifier instructs the UPDATE statement to delay the update
until there is no connection reading data from the table. The LOW_PRIORITY takes
effect for the storage engines that use table-level locking only, for example, MyISAM,
MERGE, MEMORY.
2. The IGNORE modifier enables the UPDATE statement to continue updating rows even if
errors occurred. The rows that cause errors such as duplicate-key conflicts are not updated.
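As a short sketch of the statement in use (the employees table, its columns, and the values are hypothetical), this updates exactly one row by its key:

```sql
-- Hypothetical employees table; the WHERE clause limits the change to one row.
UPDATE employees
SET email = '[email protected]',
    phone = '555-0100'
WHERE employee_id = 7;
```

Without the WHERE clause, the same SET would be applied to every row in employees.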
MySQL DELETE statement
If you want to delete a record from any MySQL table, then you can use the SQL command
DELETE FROM. You can use this command at the mysql> prompt as well as in any script
like PHP.
Syntax
The following code block shows the generic SQL syntax of the DELETE command to delete data
from a MySQL table:
DELETE FROM table_name [WHERE Clause]
If the WHERE clause is not specified, then all the records will be deleted from the given MySQL
table.
You can specify any condition using the WHERE clause.
You can delete records in a single table at a time.
The WHERE clause is very useful when you want to delete selected rows in a table.
Deleting Data from the Command Prompt
This will use the SQL DELETE command with the WHERE clause to delete selected data
from the MySQL table – tutorials_tbl.
root@host# mysql -u root -p password;
Enter password:*******
mysql> use TUTORIALS;
Database changed
mysql> DELETE FROM tutorials_tbl WHERE tutorial_id=3;
Query OK, 1 row affected (0.23 sec)
mysql>
Deleting Data Using a PHP Script
You can use the SQL DELETE command, with or without the WHERE clause, in the
PHP function mysql_query(). This function executes the SQL command in the same
way as it is executed at the mysql> prompt. (The mysql_* functions belong to the old MySQL
extension, which, as noted above, was deprecated and later removed; current PHP uses MySQLi or PDO.)
Example
Try the following example to delete a record from the tutorial_tbl whose tutorial_id is 3.
<?php
$dbhost = 'localhost:3036';
$dbuser = 'root';
$dbpass = 'rootpassword';
$conn = mysql_connect($dbhost, $dbuser, $dbpass);
if(! $conn ) {
die('Could not connect: ' . mysql_error());
}
$sql = 'DELETE FROM tutorials_tbl WHERE tutorial_id = 3';
mysql_select_db('TUTORIALS');
$retval = mysql_query( $sql, $conn );
if(! $retval ) {
die('Could not delete data: ' . mysql_error());
}
echo "Deleted data successfully\n";
mysql_close($conn);