INDEX
Sr. No. / Article Name / Author / Page No.

DESIGN ENGINEERING
1 Fault analysis of the wear fault development in rolling bearings S.H.Kalaskar 01
2 Impact of Mesh Quality Parameters on Elements Such As Beam, Shell And 3D Solid In Structural Analysis M.M.Jadhav 04
3 A study of approaches of vibration analysis of faults in rolling element bearing on rotating machinery A.A.Shinde 07
4 Vibration damper in transmission line A.D.Apte 11
MATERIALS AND MANUFACTURING
5 Green Composites- A step towards Green Engineering Dr.S.S.Ahankari 14
6 Additive manufacturing Y.P.Ballal 21
7 Optical CMM G.M.Chendake 23
8 Review on 3D printing V.S.Ganachari 26
9 SAP: Bright opportunity for a career M.A.Sutar 37
10 Nitinol (NiTi) shape memory alloys K.I.Nargatti 41
ENERGY AND POWER
11 Phase Change Materials P.D.Kulkarni 44
12 Concentrated solar power plants S.C.Naik 53
13 Solar aircraft: future need S.S.Chavan 58
14 Solar impulse 2 aviation future S.D.Patil 60
15 Solar operated power bank S.S.Shirguppikar 63
16 A review on investigation of flow through centrifugal pump impellers using computational fluid dynamics R.R.Gaji 69
17 Cooling rate enhancement by magnetic nano fluid P.B.Patil 71
18 Cryo desalination : A refrigeration solution for thirsty world J.S.Jadhav 73
19 Enhancement of heat transfer rate from notched fins by considering aspect ratio P.V.Mali 75
20 Heat pipes in solar collectors : overview and basics S.A.Urunkar 78
21 Hydrogen fuelled I.C. engine an overview V.S.Jadhav 82
22 Recent advances in heat transfer enhancement A.R.Mane 86
23 Enhancement of vortex cooling capacity by reducing hot tube surface temperature S.V.Yadav 90
INTERDISCIPLINARY
24 Photoelasticity A.V.Patil 93
25 An integrated approach for reliability analysis of resilient system R.B.Patil 97
26 How Japan replaced half its nuclear capacity with efficiency P.M.Wadekar 100
27 Study of dielectric fluid flow in micro electro discharge milling process using CFD method S.A.Mullya 104
28 Low cost automation an insight S.V.Patil 108
29 ASTROSAT –India’s first multi wavelength astronomy satellite K.J.Burle 111
30 Bullet Train M.M.Salgar 114
DESIGN ENGINEERING
INFLUENCE December 2015
Department of Mechanical Engineering Page | 1
1. Fault Analysis of the Wear Fault Development in Rolling Bearings
By S.H. Kalaskar
Ball passing frequencies (BPFs) are very important features for condition monitoring and fault diagnosis of rolling ball bearings. The ball passing frequency on the outer raceway (BPFO) and the ball passing frequency on the inner raceway (BPFI) are usually calculated from two well-known kinematic equations.
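For reference, the two kinematic equations in question can be sketched as follows, assuming pure rolling (no slippage): BPFO = (n/2) fs (1 - (d/D) cos a) and BPFI = (n/2) fs (1 + (d/D) cos a), where n is the number of balls, fs the shaft rotation frequency, d the ball diameter, D the pitch diameter, and a the contact angle. The bearing numbers in the sketch below are purely illustrative.

```python
import math

def ball_pass_frequencies(n_balls, shaft_hz, ball_dia, pitch_dia, contact_angle_deg=0.0):
    """No-slip kinematic estimates of the ball passing frequency on the
    outer raceway (BPFO) and inner raceway (BPFI), in Hz."""
    ratio = (ball_dia / pitch_dia) * math.cos(math.radians(contact_angle_deg))
    bpfo = 0.5 * n_balls * shaft_hz * (1.0 - ratio)
    bpfi = 0.5 * n_balls * shaft_hz * (1.0 + ratio)
    return bpfo, bpfi

# Illustrative bearing: 9 balls, 30 Hz shaft, 7.94 mm balls, 39 mm pitch diameter
bpfo, bpfi = ball_pass_frequencies(9, 30.0, 7.94, 39.0)
```

Note that BPFO + BPFI always equals n_balls times the shaft frequency under this no-slip assumption; the article's point is precisely that slippage in real loaded bearings makes these simple estimates inaccurate.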
A novel method for accurately calculating BPFs based on a complete dynamic model of rolling ball bearings with localized surface defects is proposed. In this dynamic model, three-dimensional motions, relative slippage, cage effects and localized surface defects are all considered. Moreover, localized surface defects are modeled accurately with consideration of the finite size of the ball, the additional clearance due to material absence, and changes in contact force directions. The proposed method reasonably predicts the dynamic behavior of actual ball bearings with localized surface defects. Parametric investigation shows that the shaft speed, external loads, the friction coefficient, raceway groove curvature factors, the initial contact angle, and defect sizes all have great effects on BPFs. For a loaded ball bearing, a combination of rolling and sliding occurs in the contact region, and the BPFs calculated from simple kinematic relationships are inaccurate, especially under high speed, low external load, and large initial contact angle conditions where severe skidding occurs.
Rolling element bearings are widely used in aero-engines, high-speed spindles and other rotating machinery. Because of material failure and adverse operating conditions, bearing faults often occur during operation and may lead to failure of the whole system. Therefore, condition monitoring and fault diagnosis of rolling element bearings are crucial for the prevention of system failure.
In practice, vibration signals are widely used for the fault diagnosis of rolling element bearings. Fault features are extracted from measured vibration signals with the aid of advanced signal processing techniques, and in recent decades many such techniques have been proposed for extracting bearing fault features from measured vibration signals.
The rolling element bearing (REB) is one of the most critical components determining machinery health and remaining lifetime in modern production machinery. Robust condition monitoring (CM) tools are needed to guarantee the healthy state of REBs during operation. CM tools indicate upcoming failures, which provides more time for maintenance planning. CM tools aim to monitor the deterioration, i.e., the wear evolution, rather than just to detect the defects.
Signal processing methods are required to extract the defect features. Over the years, several
methods have been developed to extract the defect features from the raw signals.
The current signal processing methods try to overcome several challenges [1]: (1)
remove the speed fluctuation; (2) remove the noise effect; (3) remove the smearing effect of
transfer path; (4) select optimal band of high Signal-to-Noise ratio; and (5) extract clear defect
features. The order tracking methods are used to avoid the smearing of discrete frequency
components due to speed fluctuations. To handle the smearing effect of the transfer path, the minimum entropy deconvolution method has been developed. For the background noise problem, different de-noising filters have been developed, such as discrete/random separation, adaptive noise cancellation, self-adaptive noise cancellation and linear prediction methods. Different feature extraction methods have been applied to specific monitoring techniques such as vibration, acoustic emission, oil-debris, ultrasound, electrostatic and shock-pulse measurements, and their use in fault detection has been studied. Many studies have used simple signal/data processing techniques such as root mean square (RMS), kurtosis and the FFT. However, the largest share of studies has focused on developing new techniques: envelope analysis, wavelets, data-driven methods, expert systems, fuzzy logic techniques, etc.
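As a brief illustration of why kurtosis is popular as a simple defect indicator, the sketch below (a minimal example assuming NumPy is available; the signals are synthetic) computes RMS and kurtosis and shows kurtosis rising when periodic impulses, mimicking defect impacts, are injected into otherwise Gaussian noise:

```python
import numpy as np

def rms(x):
    """Root mean square value of a signal."""
    return float(np.sqrt(np.mean(np.square(x))))

def kurtosis(x):
    """Normalized fourth central moment: about 3 for Gaussian noise,
    markedly higher when impulsive defect content is present."""
    x = np.asarray(x, dtype=float)
    c = x - x.mean()
    return float(np.mean(c ** 4) / np.mean(c ** 2) ** 2)

rng = np.random.default_rng(0)
healthy = rng.standard_normal(4096)   # noise-like "healthy" signal
faulty = healthy.copy()
faulty[::512] += 8.0                  # periodic impacts from a defect
```

Here kurtosis(healthy) stays near 3 while kurtosis(faulty) is substantially larger, which is the basis of kurtosis-based trending of bearing condition.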
However, the development of a physical defect, i.e., wear, is an evolutionary process taking place over a specific interval within the lifetime. Therefore, the sixth challenge is to track the defect features over the whole lifetime in order to diagnose the bearing health. The tracking methods utilize the aforementioned feature extraction methods to follow the extracted feature(s) over the whole lifetime. Therefore, the most important issue in a tracking technique is the reliability of the signal analysis method and how effective an indication it can provide. The detection of defect propagation has been studied in [2], [3], [4] and [5]. First, it was observed that increasing the defect length increases the burst duration. Second, it was also observed that increasing the defect width increases the burst amplitude ratio. In practice, signal analysis methods are verified by introducing a virtual defect into a dynamic bearing model and validated with artificially introduced defects on test rigs. It is then assumed that gradually increasing the defect size correlates with the defect severity (i.e., wear evolution). For example, Nakhaeinejad [6] utilised bond graphs to study the effects of defects on bearing vibration. A localised fault was introduced into the dynamic model with different defect sizes to represent the development of defect severity. Clearly, this kind of approach assumes a linear relationship between the defect size and the obtained impact response.
References:
1. N. Sawalhi, "Diagnostics, Prognostics and Fault Simulation for Rolling Element Bearings", PhD thesis, UNSW, Sydney, 2007.
2. T. Yoshioka, T. Fujiwara, "Measurement of propagation initiation and propagation time of rolling contact fatigue cracks by observation of acoustic emission and vibration", Tribology Series, Vol. 12, 1987, pp. 29-33.
3. H. Kakishima, T. Nagatomo, H. Ikeda, T. Yoshioka, A. Korenaga, "Measurement of acoustic emission and vibration of rolling bearings with an artificial defect", QR of RTRI, Vol. 41(3), 2000, pp. 127-130.
4. A.M. Al-Ghamd, D. Mba, "A comparative experimental study on the use of acoustic emission and vibration analysis for bearing defect identification and estimation of defect size", Mechanical Systems and Signal Processing, Vol. 20, 2006, pp. 1537-1571.
5. Y.-H. Kim, A.C.C. Tan, J. Mathew, B.-S. Yang, "Condition monitoring of low speed bearings: a comparative study of the ultrasound technique versus vibration measurements", World Congress of Engineering Asset Management, 2006.
6. M. Nakhaeinejad, "Fault Detection and Model-Based Diagnostics in Nonlinear Dynamic Systems", PhD thesis, University of Texas, Austin, 2010.
2. Impact of Mesh Quality Parameters on Elements Such As Beam, Shell and 3D Solid in Structural Analysis
By M.M. Jadhav
Finite element analysis has reached a state of maturity in which 3-D applications are commonplace. Most analysts, as well as most commercial codes, use solid elements based on the isoparametric formulation, or variations of it, for 3D analyses [1-4]. For simple geometries, or for applications in which it is possible to build a mesh "by hand", analysts have relied heavily on the 8-node hexahedral element commonly known as "brick" or "hexa" [5]. For more complex geometries, however, the analyst must rely on automatic (or semi-automatic) mesh generators. In general, automatic mesh generators produce meshes made of tetrahedral elements, rather than hexahedral elements. The reason is that a general 3-D domain cannot always be decomposed into an assembly of bricks; however, it can always be represented as a collection of tetrahedral elements. As the demand for analysis of more complex configurations has grown, coupled with the increasing popularity of automatic mesh generators, the need to understand the relative merits of tetrahedral and hexahedral elements has become apparent. It is known, for example, that linear tetrahedral elements do not perform very well, as expected, because they are constant-strain elements; thus, too many elements are required to achieve satisfactory accuracy.
What remains unclear, however, is whether brick elements perform better or worse than quadratic tetrahedra, that is, tetrahedral elements including mid-side nodes. Specifically, for a given number of nodes (or degrees of freedom), the analyst needs to know under what circumstances it is better to use bricks instead of quadratic tetrahedra. This amounts to investigating the accuracy and efficiency of such elements in a variety of problems characterized by different deformation patterns, such as bending, shear, torsion and axial behaviour. In addition, if a mesh made of linear tetrahedral elements does not yield a result within acceptable error, it is useful to know which strategy to follow: (a) decrease the size of the elements while keeping them linear, or (b) make the elements quadratic by introducing additional (mid-side) nodes.
With modern finite element tools it is not difficult to represent results as color pictures. However, the correctness of the results is the actual cornerstone of the simulation, and it crucially depends on element quality. There are no general rules that decide outright which element shape should be preferred, but there do exist some basic principles, as well as experience from applications, that are very helpful in avoiding simulation errors and in judging the validity of the results.
It is important to know that for bending-dominated problems, linear hexahedral elements only lead to good results if extra shape functions or enhanced strain formulations are used. Linear tetrahedra tend to be too stiff in bending problems; even when the number of elements through the depth is increased, the structure remains too stiff. The quadratic (mid-side node) tetrahedron reproduces the exact analytic solution for pure bending-dominated problems even with a coarse mesh of only one element through the depth. It is obvious that a linear tetrahedron element yields unacceptable approximations, and the user should not use it for bending-dominated problems. On the other hand, quadratic mid-side node tetrahedral elements are good for bending-dominated problems.
Tetrahedral and hexahedral element solutions in nonlinear problems:
Consider a nonlinear contact simulation, solved once with a brick mesh and once with a quadratic tetrahedral mesh, where the material behavior is linear and geometric nonlinearities are ignored. The number of active degrees of freedom is exactly the same in both models, and the element aspect ratio in both meshes is equivalent.
It can be observed that the results obtained with bricks and quadratic tetrahedra are, in terms of accuracy, roughly equivalent. This is significant because it indicates that analysts who rely on automatic mesh generators (which in general produce meshes made of tetrahedral elements) are not at a disadvantage compared to analysts who use bricks. In other words, the tri-linear brick element, a long-time favorite of many finite element practitioners, appears not to have a substantial advantage over the quadratic tetrahedron. A second conclusion concerns the best approach to take when a model made of linear tetrahedra does not give satisfactory results. The analyses suggest that, in general, it is better to increase the order of the elements rather than refine the mesh with smaller linear elements.
Quality of tetrahedral elements in thin-walled structures
It is important to investigate the quality of quadratic tetrahedral elements when used for simulating the mechanical behavior of thin-walled structures. The user can investigate the stiffness of a plate by performing a modal analysis and comparing the numerical results with the analytic solution for the first frequencies. Because of the nature of thin-walled structures (low stiffness normal to the plane), Reissner-Mindlin based shell elements are usually used for the finite element simulation instead of classical displacement-based solid elements. However, the geometric modelling effort required to use shell elements can be high, since for shell applications the user typically needs a mid-surface model. Most CAD models are 3D solid models, and the user must work on the solid model to obtain a mid-surface model, which is usually not an easy task. For very complicated 3D solid models it is very difficult and
maybe even impossible to obtain the mid-surface in an efficient way. It follows that more and more thin-walled 3D solid models are meshed and calculated using quadratic tetrahedral elements. Caution must be taken when using tetrahedral elements for thin-walled structures, since the structural behaviour can be much too stiff in bending if the element size relative to the thickness is not properly chosen. This may also result in numerically ill-conditioned stiffness matrices.
References:
1. B.M. Irons, "Engineering applications of numerical integration in stiffness methods", J. Am. Inst. Aeronaut. Astronaut., Vol. 4(11), pp. 2035-2037, 1966.
2. I.C. Taig, "Structural analysis by the matrix displacement method", English Electric Aviation, Report SO-17, 1961.
3. J. Ergatoudis, B. Irons and O.C. Zienkiewicz, "Curved isoparametric quadrilateral elements for finite element analysis", Int. J. Solids Struct., Vol. 4, pp. 31-42, 1968.
4. R.D. Cook, Concepts and Applications of Finite Element Analysis, 2nd edn., Wiley, New York, Chap. 5, 1981.
5. T.J.R. Hughes, The Finite Element Method, Prentice-Hall, Englewood Cliffs, NJ, Section 3.5, 1987.
6. R.L. Taylor, J.C. Simo, O.C. Zienkiewicz and A.C. Chan, "The patch test: A condition for assessing finite element convergence", Int. J. Numer. Methods Eng., Vol. 22(1), pp. 39-62, 1986.
3. A Study of Approaches of Vibration Analysis of Faults in Rolling Element Bearing on Rotating Machinery
By A.A. Shinde
1. Introduction
Rolling element bearings are used in a wide variety of rotating machines, from small hand-held devices to heavy duty industrial systems, and are a primary cause of breakdowns in these machines. In rotating machines, one of the major concerns is fatigue failure of the rolling element bearing components. When the repeatedly cycled stress on a surface in rolling contact exceeds the endurance strength of the material, fatigue cracking of the surface occurs. This defect propagates and results in large pits or spalls on the surfaces of the bearing components. A small fault in the bearing system can quickly develop into a dangerous failure mode without any notable signs. Therefore, accurate machinery fault diagnosis is becoming of paramount importance to avoid catastrophic failures and human casualties.
Several vibration and acoustic measurement methods have been used for the detection of defects in rolling element bearings. The vibration signals contain information about defective parts, and a variety of vibration based techniques have been developed to monitor the condition of a bearing. Vibration signals are commonly analyzed in the frequency domain: the measured vibration signal is processed with FFT-based signal processing techniques to extract the fault-sensitive features that serve as monitoring indices [4].
Theoretical predictions based on experimental observations mark the essence of useful research. Proper use of statistical methods greatly improves the efficiency of experiments and helps to draw meaningful conclusions from the experimental data. There are two basic aspects of concern in scientific experimentation: the design of the experiment and the statistical analysis of the data. Successful experimentation requires knowledge of the important factors that influence the output. Design of experiments (DOE) helps to determine the factors which are important for explaining process variation. Interactions are the driving forces in many processes, and proper understanding of a process may be difficult or impossible if important interactions remain undetected. DOE also helps to understand how the influencing factors interact with the system. Methods such as factorial design and the Response Surface Method (RSM) are commonly used for this purpose [3].
The present study identifies the effects of localized defects on the amplitude of vibration using the response surface method (RSM). The combined effects of these defects on double row ball bearings, which most studies have not addressed, are also examined. This work attempts to analyze the vibration responses of a rotor-bearing system supported on double row ball bearings.
Experimental studies of a rotor bearing system are carried out to obtain the vibration response due to localized defects on the outer race, inner race and roller/ball.
Jose Mathew et al. [2] presented an analytical model for predicting the effect of a localized defect on ball bearing vibrations; the contact force was calculated using Hertzian contact deformation theory. Patil et al. [3] observed that the presence of a defect in the bearing results in increased vibrations; time domain indices such as rms, crest factor, and kurtosis were among the important parameters used to monitor the condition of the bearing, and an RSM model was developed with kurtosis as the response. Choudhury et al. [5] proposed a rotor-bearing system model which predicts significant components at the harmonics of the characteristic defect frequency for a defect on a particular bearing element. Gallina et al. [6] utilized the response surface method (RSM) to analyze the effects of design and operating parameters on the vibration signature of a rotor bearing system; distributed defects such as internal radial clearance and surface waviness of the bearing components were considered.
2. Bearing Fault Detection Methods
The approaches used to study the effect of localized defects, such as cracks, pits and dents, are:
(1) Run the bearing until failure and monitor the changes in its vibration response.
(2) Intentionally introduce a defect into the bearing by techniques such as spark erosion, scratching or indentation, measure the vibration response, and compare it with that of a defect-free bearing.
(3) Examine the shock waves generated in the bearing housing when a rolling element moves over a damaged area.
(4) Apply statistical analysis to the time signal.
2.1 Vibration Measurement Technique:
Vibration produced by defects is sensed using velocity transducers or accelerometers; recently, laser-based noncontact velocity transducers have become more commonplace. The signals collected from the rolling contact element can be analyzed in the following ways:
(1) Overall amplitude of the vibration level based on time domain data.
(2) Transformation of the time signal into the frequency domain.
2.1.1 Time Domain Approach:
Time domain refers to a display or analysis of the vibration data as a function of time. The principal advantage of this format is that little or no data are lost prior to inspection. However, the disadvantage is that there is often too much data for easy and clear fault diagnosis.
INFLUENCE December 2015
Department of Mechanical Engineering Page | 9
Time waveform analysis includes the visual inspection of the time history of the vibration signals, time waveform indices, the probability density function, and probability density moments. A time waveform index is a single number calculated from the raw vibration signal and used for trending and comparisons. The indices include the peak value, mean value, rms value, and peak-to-peak amplitude. This method has been applied with limited success for the detection of defects.
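These indices can be computed directly from the raw vibration signal. The sketch below (a minimal example assuming NumPy is available; the unit sine test signal is illustrative) returns the listed indices plus the crest factor, the peak-to-rms ratio often trended alongside them:

```python
import numpy as np

def waveform_indices(x):
    """Single-number trending indices computed from the raw time signal."""
    x = np.asarray(x, dtype=float)
    peak = float(np.max(np.abs(x)))
    rms = float(np.sqrt(np.mean(x ** 2)))
    return {
        "peak": peak,
        "mean": float(np.mean(x)),
        "rms": rms,
        "peak_to_peak": float(np.max(x) - np.min(x)),
        "crest_factor": peak / rms,   # rises with impulsive content
    }

# One full period of a unit sine wave as a reference signal:
# peak = 1, rms = 1/sqrt(2), peak-to-peak = 2, crest factor = sqrt(2)
t = np.arange(4096) / 4096.0
indices = waveform_indices(np.sin(2 * np.pi * t))
```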
2.1.2 Frequency-Domain Approach:
The frequency domain refers to the display or analysis of the vibration data as a function of frequency. The time domain vibration signal is typically processed into the frequency domain by applying the Fourier transform, usually in the form of the fast Fourier transform (FFT) algorithm. The principal advantage of this method is that the repetitive nature of the vibration signal is clearly displayed as peaks in the frequency spectrum at the frequencies where the repetition takes place. A defect in a rolling element bearing produces pulses of very short duration whenever the defect is struck during the rotation of the system. These pulses excite the natural frequencies of the bearing elements, resulting in an increase in the vibration energy at these high frequencies. The resonant frequencies can be calculated theoretically. Each bearing element has a characteristic rotational frequency, and with a defect on a particular bearing element, an increase in the vibration energy at this element's rotational frequency may occur. This defect frequency can be calculated from the geometry of the bearing and the element rotational speed.
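As a minimal sketch of this idea (the 107 Hz defect frequency and the sampling rate are invented for illustration, and NumPy is assumed), the FFT of a signal containing a periodic defect component shows a clear spectral peak at that frequency despite background noise:

```python
import numpy as np

fs = 4096                       # sampling rate in Hz (illustrative)
t = np.arange(fs) / fs          # one second of samples -> 1 Hz resolution
defect_hz = 107.0               # hypothetical characteristic defect frequency
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * defect_hz * t) + 0.5 * rng.standard_normal(fs)

spectrum = np.abs(np.fft.rfft(signal))          # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
```

With a one-second record, peak_hz recovers the 107 Hz component. In practice, envelope demodulation is usually applied first, because the defect impact energy appears as sidebands around high resonance frequencies rather than directly at the defect frequency.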
3. Conclusion
This article has focused on vibration signature analysis for bearing fault detection and the various methods used for it, giving introductory information about the time domain approach and the frequency domain approach.
References
[1] N. Tandon, A. Choudhury, 1999, "A review of vibration and acoustic measurement methods for the detection of defects in rolling element bearings", Tribology International, Vol. 32, pp. 469-480.
[2] Jose Mathew, M.S. Patil, P.K. Rajendrakumar, Sandeep Desai, 2010, "A theoretical model to predict the effect of localized defect on vibrations associated with ball bearing", International Journal of Mechanical Sciences, Vol. 52, pp. 1193-1201.
[3] M.S. Patil, Jose Mathew, P.K. Rajendrakumar, Sumit Karade, 2010, "Experimental Studies Using Response Surface Methodology for Condition Monitoring of Ball Bearings", Journal of Tribology, ASME, Vol. 132, pp. 044505-1-044505-7.
[4] Zeki Kıral, Hira Karagülle, 2006, "Vibration analysis of rolling element bearings with various defects under the action of an unbalanced force", Mechanical Systems and Signal Processing, Vol. 20, pp. 1967-1991.
[5] A. Choudhury, N. Tandon, 2006, "Vibration Response of Rolling Element Bearings in a Rotor Bearing System to a Local Defect Under Radial Load", Journal of Tribology, Vol. 128, pp. 252-261.
[6] A. Gallina, A. Martowicz, T. Uhl, 2006, "An Application of Response Surface Methodology in the Field of Dynamic Analysis of Mechanical Structures Considering Uncertain Parameters", ISMA 2006 Conference, Leuven, Belgium.
[7] M. Jun, 1995, "Detection of Localised Defects in Rolling Element Bearings via Composite Hypothesis Test", Mechanical Systems and Signal Processing, Vol. 9(1), pp. 63-75.
[8] M.S. Patil, Jose Mathew, P.K. Rajendrakumar, 2008, "Bearing Signature Analysis as a Medium for Fault Detection: A Review", Journal of Tribology, Vol. 130, pp. 014001-1-7.
[9] Roller Bearing Catalogue, October 2012, SKF Group.
4. VIBRATION DAMPER IN TRANSMISSION LINE
The conductor vibration results in cyclic bending of the conductor near hardware attachments, such as suspension clamps, and consequently causes conductor fatigue and strand breakage.
When a "smooth" stream of air passes across a cylindrical shape, such as a conductor, vortices (eddies) are formed on the back side. These vortices alternate from the top and bottom surfaces and create alternating pressures that tend to produce movement at right angles to the direction of the air flow. This is the mechanism that causes Aeolian vibration.
The term "smooth" is used in this description because unsmooth air (i.e., air with turbulence) will not generate the vortices and associated pressures. The degree of turbulence in the wind is affected both by the terrain over which it passes and by the wind velocity itself. It is for these reasons that Aeolian vibration is generally produced by wind velocities below 15 miles per hour (mph). Winds higher than 15 mph usually contain a considerable amount of turbulence, except in special cases such as open bodies of water or canyons where the effect of the terrain is minimal.
Vortex Frequency (Hertz) = 3.26 V / d
Where: V is the wind velocity component normal to the conductor, in miles per hour;
d is the conductor diameter, in inches;
3.26 is an empirical aerodynamic constant.
One thing that is clear from the above equation is that the frequency at which the vortices alternate is inversely proportional to the diameter of the conductor.
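A quick worked example of this formula (the 10 mph wind and 1.1 inch conductor diameter are illustrative values, not from the article):

```python
def vortex_frequency_hz(wind_mph, conductor_dia_in):
    """Empirical Aeolian vortex shedding frequency: f = 3.26 * V / d,
    with V in miles per hour and d in inches."""
    return 3.26 * wind_mph / conductor_dia_in

# A 10 mph crosswind on a 1.1 in diameter conductor sheds vortices
# at roughly 30 Hz; halving the diameter doubles the frequency.
f = vortex_frequency_hz(10.0, 1.1)
```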
Aeolian vibrations mostly occur at steady wind velocities from 1 to 7 m/s. With increasing wind turbulence, the wind power input to the conductor decreases. The intensity of the induced vibrations depends on several parameters, such as the type of conductors and clamps, tension, span length, surrounding topography, height and direction of the line, as well as the frequency of occurrence of the vibration-inducing wind streams.
Hence, the smaller the conductor diameter, the higher the frequency range of vibration of the conductor. The vibration damper should meet the requirements of the frequency or wind velocity range and also have a mechanical impedance closely matched to that of the conductor. The vibration dampers also need to be installed at suitable positions to ensure effectiveness across the frequency range.
Effect of Aeolian Vibration:
Abrasion is the wearing away of the surface of a conductor and is generally associated with loose connections between the conductor and attachment hardware or other conductor fittings. Abrasion damage can also occur within the span itself, at spacers.
Fatigue failures are the direct result of bending a material back and forth by a sufficient amount over a sufficient number of cycles. In the case of a conductor subjected to Aeolian vibration, the maximum bending stresses occur at locations where the conductor is restrained from movement. Such restraint can occur in the span at the edges of clamps of spacers, spacer dampers and Stockbridge dampers. When the bending stresses in a conductor due to Aeolian vibration exceed the endurance limit, fatigue failures will occur.
In a circular cross-section, such as a conductor, the bending stress is zero at the center and increases to a maximum at the top and bottom surfaces (assuming bending about the horizontal axis). This means that the strands in the outer layer are subjected to the highest level of bending stress and will logically be the first to fail in fatigue.
Working of Vibration Damper
When the damper is placed on a vibrating conductor, movement of the weights produces bending of the steel strand. The bending of the strand causes the individual wires of the strand to rub together, thus dissipating energy. The size and shape of the weights and the overall geometry of the damper influence the amount of energy that will be dissipated at specific vibration frequencies.
Since, as presented earlier, a span of tensioned conductor will vibrate at a number of different resonant frequencies under the influence of a range of wind velocities, an effective damper design must have the proper response over the range of frequencies expected for specific conductor and span parameters.
INFLUENCE December 2015
Department of Mechanical Engineering Page | 13
MATERIALS AND MANUFACTURING
5. Green Composites - A Step towards Green Engineering
Abstract:
Recently, natural fiber-polypropylene (PP) bio-composites have gained commercial attention because
natural fibers offer potential benefits such as biodegradability, regenerability, reduced
composite density and cost, and process friendliness. These composites are still not
fully eco-friendly, however, owing to the petroleum origin and non-biodegradable nature of the polymer
matrix. Eco-friendly green composites manufactured from plant-derived fibers and crop-derived
plastics not only mitigate the growing environmental threat but also provide a possible
answer to the uncertainty of petroleum supply. These bio-based green composites are
recyclable and environmentally acceptable, and can be sustainable if they become
commercially competitive with petro-based composites. This paper discusses some green composites
and their attributes. These materials have found use in automotive, computer hardware,
packaging, sporting equipment and other non-critical sectors. Green composites
can contribute substantially to reducing the environmental pollution caused by
traditional petro-based polymers.
Keywords: green composites, attributes, applications.
I] Introduction
The polymer industry is one of the most innovative sectors worldwide. The secret of its
success lies in the polymers themselves, which leave almost nothing to be desired, even in
view of our society's globally important tasks. Population growth, increasing energy demand,
scarcity of resources, climate change: mankind faces challenges of previously unknown
dimensions, and meeting them requires creative minds and equally capable materials.
This is what polymers have contributed.
In the last decade, annual plastics production reached approximately 280 million tons, up from just 1
million tons in 1950. Part of this development is the fascinating possibility of turning ideas into
tangible innovations with the help of plastics. The applications and possibilities are numerous
and varied, simply too extensive to be covered fully and in detail in a few lines.
But most polymers, including polyethene and polypropylene, are not biodegradable.
This means that micro-organisms cannot break them down, so they may last for hundreds of
years in rubbish dumps. However, it is possible to include chemicals that cause the polymer to
break down more quickly. Polymers can also be burnt or incinerated. They release a lot of heat
energy when they burn, but carbon dioxide is produced, which adds to global warming, and toxic
gases are released unless the polymers are incinerated at high temperatures.
As global societies continue to grow, increasing emphasis is being placed on ensuring
the sustainability of our material systems. Topics such as greenhouse gas emissions, embodied
energy, toxicity and resource depletion are being considered increasingly by material producers
[i]. One existing class of materials with good environmental credentials is green composites.
Green composites are defined as biopolymers (bio-derived polymers) reinforced with natural
fibers. More specifically, this paper will look only at the subset of green composites that are
commonly considered biodegradable [ii].
II] Need for Green Composites.
Ecological concerns have resulted in renewed interest in natural materials. Recyclability
and environmental safety are becoming increasingly important to the introduction of materials
and products. Petroleum-based products, such as the resins in thermoset plastics, are toxic and non-
biodegradable. The resins and fibers used in green composites are biodegradable: when
dumped, they are decomposed by the action of microorganisms [iii] and converted into
H2O and CO2, which are absorbed back into plant systems. A green composite
combines plant fibers with natural resins to create a natural composite.
Green composites are a specific class of composites in which at least one of the
components (the matrix or the reinforcement) is obtained from natural resources. The
terms green composites, bio-composites and eco-composites all broadly refer to the same class of
materials [iv]. In ancient Egypt, 3000 years ago, people used straw as the reinforcing component
for mud-based wall materials in houses: they made bricks of mud with straw as
reinforcement and used these bricks to build walls. Such materials have thus been known since
ancient times, and they are good substitutes for petro-based polymers, at least in non-critical
applications.
III] Types of Green Composites.
a. Bio-fiber plastic composites:
Poly(hydroxy acid)s such as poly(glycolic acid) (PGA) and poly(lactic acid) (PLA) are
crystalline polymers with relatively high melting points. PLA has recently been highlighted
because of its availability from renewable resources such as corn. PLA is more hydrophobic
than PGA because of the incorporation of CH3 side groups. Poly(hydroxyalkanoate)s
(PHAs), which are synthesized biochemically by microbial fermentation and which
may in the future be produced by transgenic plants, represent natural polyesters. Bacteria came
first and are still the only real source of these polyesters; some years of research will be
required before transgenic plants are available for production. Poly(hydroxybutyrate) (PHB)
(commercial name Biopol) is a biotechnologically produced polyester that constitutes a carbon
reserve in a wide variety of bacteria and has attracted much attention as a biodegradable
thermoplastic polyester.
b. Bio-fiber soy-based composites.
Soybeans provide over 60% of the fats and oils used for food and the majority of feed
protein. Soybeans typically contain about 20% oil and 40% protein, and protein levels as high as
55% have been observed. Soybeans consist of discrete groups of proteins
(polypeptides) that span a broad range of molecular sizes and comprise approximately
38% non-polar, non-reactive amino acid residues and about 58% polar, reactive residues.
Modifications that take advantage of this water solubility and reactivity are exploited in improving
soy protein for use in plastics and other biomaterials [v]. Dried soy plastics display an extremely
high modulus, 50% higher than that of currently used epoxy engineering plastics, so with a proper
moisture barrier, soy protein is a potential starting material for engineering plastics [vi]. The
natural variability of such materials must thus be accurately predicted; attempts have been made
to model size, shape and tensile properties through statistical analysis of individual fibers, but
few consider modeling of laminae and laminates in three dimensions.
c. Bio-fiber natural rubber composites.
The primary effects of bio-fiber reinforcement on the mechanical properties of natural rubber
composites include increased modulus, increased strength with good bonding at high fiber
concentrations, decreased elongation at failure, greatly improved creep resistance over
particulate-filled rubber, increased hardness and a substantial improvement in cut, tear and
puncture resistance. Biodegradation of vulcanized rubber is possible, although it is
difficult due to the inter-linkages of the poly(1,4-isoprene) chains, which reduce the water
absorption and gas permeability of the material [vii]. The reinforcement of coir fiber in natural
rubber has been extensively studied. Researchers have also investigated the reinforcement
effects of a leaf fiber, sisal, in natural rubber. Attempts to incorporate oil palm fiber in a
rubber matrix have also been successful. The effect of fiber concentration on the mechanical
properties of oil palm reinforced natural rubber composites has been investigated; with
increasing fiber concentration, the tensile and tear strengths decrease [viii].
IV] Attributes of green composites.
a. Variable fiber properties.
Because natural fibers are obtained from natural sources, they suffer from natural variability
in properties, including fiber shape, length and chemical composition. Crop variety, seed
density, soil quality, fertilization, field location, fiber location on the plant, climate and harvest
timing are all factors that induce variation in these properties. Variability is a major issue to be
resolved if these materials are to be relied upon in structural applications where failure is
unacceptable and discontinuities such as micro beads may cause variation [ix].
b. Renewability.
The majority of traditional polymer matrix materials are derived from non-renewable
petroleum, which is formed from biomass over the course of millions of years; yet when
consumed as plastic products or fuel, it is usually converted back into CO2 within 1-10 years. The
use of this distinctly finite, and often volatile-priced, resource is thus unsustainable. This is a
large part of the incentive for pursuing green composites, in which both the reinforcement and
matrix materials are derived from plants, usually within a time span of less than a year. Using
renewable resources in this way, whereby the rate of CO2 sequestration is balanced with the rate
of consumption, contributes significantly to developing carbon-neutral materials.
c. Biodegradability.
Materials are defined as biodegradable if they degrade through the actions of living
organisms; this definition is often widened to include degradation through non-enzymatic
hydrolysis. Natural fibers are inherently biodegradable, as are many polymers, the
degradation rate of which is dictated by the chemistry along the backbone. For example,
polyanhydrides and polyesters both degrade through hydrolysis but at significantly different rates
(0.1 h and 3.3 years respectively), whilst polyethers are non-biodegradable since they do not
contain a hydrolysable bond. By making copolymers from polymers with different
degradation rates, this property can be tailored to the application. Biodegradation is a desired
quality since it prevents the accumulation of solid waste, a major consideration for
composite materials in general and one that hinders their use in products with a limited service life.
Green composites could allow composite materials to enter such markets, as their
biodegradability offers a serious advantage over synthetic composites. For example, pure PLA
will degrade to carbon dioxide, water and methane within 2 years, whereas petroleum-based
plastics require hundreds of years.
d. Poor durability.
As one would expect from biodegradable materials, green composites have limited
durability. Exposure to environmental conditions can lead to a rapid degradation of the material.
Accurately predicting the lifetime of green composites is a major challenge to their widespread
implementation. One concerning cause of degradation in green composites is the growth of
fungus and bacteria. Stamboulis and Baillie observed fungal growth on flax fibers after just three
days' exposure to moisture. In the work of Nadali, a bagasse/polypropylene composite was
exposed to rainbow fungus (Coriolus versicolor): after 4 months, a 30-50% reduction in bending
strength, bending modulus and hardness was observed. Significant reductions in other
mechanical properties were also logged; for example, flexural strength fell by 22%.
e. Non-toxicity.
Natural fibers are generally non-toxic, providing scope for manufacturing composites with no,
or heavily reduced, human health hazards and environmental damage throughout their life
cycle (production, processing, use and disposal). A life cycle assessment by Corbiere of
bio-fiber pallets compared to glass-fiber equivalents found reductions in human toxins (43%),
carcinogenic substances (63%) and heavy metals (71%). Fiber treatment can affect the level of
respiratory irritants contained in the resulting composite.
V] Applications of green composites.
Short life applications:
Short life-span products are typically thought of as those that are disposable, such as plastic
cutlery and packaging. So-called commodity plastics such as polyethylene, polystyrene,
polypropylene and polyvinyl chloride are used heavily in packaging, causing several
environmental concerns due to their non-biodegradability. Bio-composites that incorporate a
biodegradable polymer comparable in price and performance to commodity
polymers may well be a solution to the problem. However, short life-span products are not only
those items we consider disposable; they also include consumer electronics, for
example smart phones. With such items, changes in style and improvements in technology can
quickly render a product obsolete, despite it still being functional. Although research
on product life spans is limited, anecdotal evidence suggests that as consumers become
more involved in the fast-paced development of electronics, the rate of product obsolescence is
increasing, coupled with a reduction in product life spans. A study of discarded products in the
UK from 1993 to 1998 found that computers, telephones, faxes, radios, stereos, CD players,
mobile phones, pagers and toys had a mean age of six years or under.
Sporting equipment:
The varying properties of natural fibers and their limited absolute strength present a challenge
for their use in load-bearing applications. The difficulty of maintaining this strength
when the fibers are formed into a composite material is another obstacle for this type of application.
However, the weight-specific properties, particularly stiffness, of these fibers are excellent, so
there is both potential and incentive for their use in applications where mechanical properties
are important. Sporting equipment may be a good starting point, since failure (resulting from
material variation or degradation) is less likely to cause serious injury or expensive property
damage than in more critical applications.
Biomedical applications:
The hydrophilicity of green composites facilitates interactions with other hydrophilic surfaces
and substances such as living cell tissue. This bioactivity, coupled with their
biocompatibility and biodegradability, distinguishes green composites from their synthetic
counterparts for use in biomedical applications. By opening this field to composites,
materials can be produced that offer significant advantages over those traditionally used or
over any single component alone.
VI] Conclusion.
Green composites can find several industrial applications, although some
limitations remain, mainly regarding ductility, processability and dimensional stability. Full
biodegradability, and thus a genuinely improved environmental impact, can be obtained only by
replacing traditional polymers (derived from non-renewable resources) with biodegradable ones.
In these cases, however, new limitations arise, and current scientific investigation is
focused on selecting the most suitable biodegradable matrix and optimizing all of
the preparation and processing parameters.
Considering the commercial situation, the market is still in an opening phase, so much
can still be done to find new applications and to improve the properties, appearance and
marketability of these materials. All of these issues require significant research effort to
find new formulations (virgin or recycled polymers, traditional or biodegradable polymers;
type, appearance, quality and amount of fillers), characterize them, match them to the most
suitable applications and, in general, refine processing techniques. As the market for these
composites grows, cost reductions and quality improvements will follow.
References.
i] Koronis G., Silva A., Fontul M., "Green composites: a review of adequate materials for automotive applications", Composites Part B: Engineering, 2013, pp. 120-127.
ii] Zini E., Scandola M., "Green composites: an overview", Polymer Composites, 2011, pp. 1905-1915.
iii] Mitra B.C., "Environment Friendly Composite Materials: Bio-composites and Green Composites", Defence Science Journal, Vol. 64, pp. 244-261.
iv] Zini E., Scandola M., "Green composites: An overview", Polymer Composites, 2011, Vol. 32, pp. 1905-1915.
v] Thames S.F., "Soybean polypeptide polymers", 1994.
vi] Wang S., Sue H.J., Jane J., Journal of Macromolecular Science - Pure and Applied Chemistry, 1994, Vol. 33, p. 557.
vii] Seal K.J., Morton, "Chemical materials", Biotechnology, 1996, pp. 583-590.
viii] Ismail H., Rosnah N., Ishiaku, "Oil palm fiber-reinforced rubber composite: Effects of concentration and modification of fiber surface", Polymer International, Vol. 43, pp. 223-230.
ix] Van de Weyenberg I., Chi Truong T., Vangrimde B., Verpoest I., "Improving the properties of UD flax fibre reinforced composites by applying an alkaline fibre treatment", Composites Part A: Applied Science and Manufacturing, 2006, pp. 1368-1376.
6. Additive manufacturing
By - Y.P.Ballal
Additive manufacturing (AM) encompasses multiple techniques used to build solid
parts by adding material in layers. AM applications have taken advantage of the capabilities of
rapid prototyping (RP) technologies to produce parts with customized geometries. AM has
been in use since the mid-1980s.
AM techniques have been described by different terms with slightly different meanings,
including (a) automated fabrication; (b) solid free-form fabrication; (c) direct digital
manufacturing; (d) stereolithography; (e) three-dimensional or 3D printing; and (f) rapid
prototyping. However, the term "additive manufacturing" has been adopted as the standard
terminology for the entire field. The figure below presents a description of selected additive
manufacturing processes (note that the common feature of all additive manufacturing approaches
is the use of layering) [1].
Additive manufacturing stands in contrast to typical manufacturing processes, in which material
is removed or formed. Some of the benefits of AM over other technologies are as follows [2, 3].
• AM reduces waste because it requires only the amount of material needed to produce a part
or component.
• AM permits new types of design (including complex, three-dimensional parts) without the
limitations imposed by traditional machining.
• AM does not require investment of time and money in molds, fixtures, masks or other fixed
tooling.
• AM reduces the need for large inventories because parts can be produced just-in-time or nearly
just-in-time.
• AM allows distributed manufacturing concepts, since components need not be produced in
a factory.
Early application areas of additive manufacturing include consumer products, medical
implants and tools, dental implants and aerospace. The United States and Europe are the current
industry leaders. AM is being applied to fields such as tissue engineering and nanotechnology;
American companies have recently built the first commercial bio-printer to produce human
tissues and organs. It is expected that future advancements in technology will lead to significant
price reductions that will, in turn, drive further adoption by both businesses and end-consumers.
Some anticipated developments in the near to mid-term include the following.
The expectation is that by 2030, machines will have improved to the point that they can
directly compete with traditional manufacturing technologies. To do so, it is anticipated
that AM techniques will be capable of building products by area or by volume rather than by
layering materials as is the case today, creating multi-material products quickly and at
relatively high precision.
When fully developed and scalable, AM will lead to economies of scale, making
mass customization and easy changes in design possible. Perhaps the "largest change in 20
years for additive technology" will be in bio-fabrication. Three-dimensional cancer models and
drug-testing models may replace current animal models almost altogether. "At least the
beginnings of regenerative medicine—fabricating functional tissues and organs to repair
damage—will be possible in 20 years, if not the entire concept of living organ printing". Patents
for many of the established additive technologies will expire over the next 5 to 10 years. China,
which is investing heavily in this technology, will almost certainly become a key player in
several market segments as the patents expire [4].
References:
1. White Papers on Advanced Manufacturing Questions for PCAST (STPI, 2010).
2. Make: An American Manufacturing Movement (US Council on Competitiveness, 2011).
3. Emerging Global Trends in Advanced Manufacturing (IDA for the Office of the Director of National Intelligence, ODNI) [Shipp et al., 2012].
4. Making Things—21st Century Manufacturing and Design: Report of a Symposium (NAE, 2012).
7. Optical CMM
By - G.M.Chendke.
1. Introduction to Coordinate Measuring Machines (CMM)
For many years, industry has used contact coordinate measuring machines (CMMs)
to measure points on the surface of a product. Contact CMMs work by having an
electrical probe touch down at various points on the surface of a product and send back
information about length and other geometrical parameters. CMMs are versatile and accurate
instruments, but they are very slow. And because current products are elaborately designed and
have complex, free-form shapes, the sparse, discrete points measured by contact CMMs are
insufficient for CAE. For this reason, over the last decade, non-contact (mostly optical) CMMs,
which are capable of measuring several million points almost instantly, have been overtaking
conventional CMMs. Optical CMMs have recently developed and matured to the point that
they may represent the revolutionary change the metrology market is looking for [1].
2. The basics of Optical CMM [4,5]:
Lighting is used to illuminate the workpiece or object to be measured. There are three basic
lighting schemes. Backlight lights the part in silhouette to measure outer profiles and
features such as through holes. Direct lighting is integrated into the optical beam path and lights
the part from a direct vertical angle. A ring light is a light source mounted around the optics
that lights the part from above.
Fig. 1 Portable Coordinate Measuring Machine [2]
Fig. 2 Optical CMM [3]
Fig. 3 Optical Measurement Sensor setup [5]
The optical sensor has a set of lenses through which the reflected light (or direct light, in
the case of backlighting) passes. The light is focused on a camera containing an optical chip,
or CCD. The CCD has a light-sensitive pixel array; the chip converts the light intensity at
each pixel into an electronic signal with a corresponding value. This value, called a
"gray-scale value", is a number between 0 and 255. The software finds an edge by
determining where the gray-scale value between two neighboring pixels changes drastically. The
software then combines the positions of these edges on the optical sensor with the XYZ
position of the measuring machine to provide coordinate locations. An edge-finder system
links several of these boundary points together to create the edges or contours of the part features.
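The edge-finding step described above can be sketched in a few lines of Python, assuming a single row of gray-scale values; the jump threshold here is an illustrative assumption, since the article does not specify the software's threshold or any sub-pixel interpolation:

```python
def find_edges(gray_row, threshold=60):
    """Return indices where the gray-scale value (0-255) jumps between
    neighboring pixels by more than `threshold` (assumed value)."""
    return [i for i in range(len(gray_row) - 1)
            if abs(gray_row[i + 1] - gray_row[i]) > threshold]

# Bright backlight on either side of a dark part: two edges expected.
row = [250, 248, 247, 60, 55, 54, 200, 201]
edges = find_edges(row)  # → [2, 5]
```

Real edge-finder software refines these integer pixel positions to sub-pixel accuracy, but the principle of thresholding the gray-scale jump is the same.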
Measurements are made in the Z axis using the contrast or "autofocus" method. An
automatic cycle moves the Z axis of the machine, and thus the focal point of the optics,
through the surface of the part. Numerous images are taken and evaluated for contrast; the
image has the best contrast when it is exactly in the focal plane. By selecting the image with the
best contrast and knowing the Z position of the axis for that image and the calibrated focal
distance of the optics, the height of the part at any XY position can be calculated.
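The autofocus sweep can be sketched similarly. The contrast metric used here (variance of the gray values) is one common choice and an assumption on my part, since the article does not name the metric:

```python
def image_contrast(pixels):
    """Variance of gray values: a simple, assumed contrast metric."""
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def best_focus_z(stack):
    """Given (z_position, pixel_list) pairs from an autofocus sweep,
    return the Z at which the image has the highest contrast."""
    return max(stack, key=lambda zp: image_contrast(zp[1]))[0]

# Illustrative sweep: the in-focus image has sharp (high-variance) edges.
stack = [
    (10.0, [120, 122, 121, 119]),   # blurred: low contrast
    (10.5, [40, 220, 35, 230]),     # in focus: high contrast
    (11.0, [110, 130, 115, 125]),
]
z = best_focus_z(stack)  # → 10.5
```

Adding the calibrated focal distance to this best-contrast Z position gives the part height at that XY location.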
3. Applications[4,5]:
i. Optical measurement works very well for flat parts that can be measured in silhouette.
These are difficult to measure with touch probes because little contact area is available
on the sides of flat parts. A distinct jump in the gray-scale value makes it easy for the
software to determine an edge.
ii. Optical measurement also suits 2-D profiles such as cross-sections of extrusions. 3-D parts
with small features, especially where tight tolerances are involved, are good candidates for
optical measurement, since contact in tight places is not required with optics. Likewise, rubber
or plastic parts that are easily deflected and distorted are best measured with non-contact
optics.
References:
1. Toshiyuki Takatsuji, "How to select a noncontact coordinate measuring machine", SPIE Newsroom, 10.1117/2.1200812.1384.
2. William Plutnick, Swiss Metrology, "How to understand calibration and performance of portable CMM arms" (howtoexplainportablecmmaccuracy-150914211231-lva1-app6892.pdf).
3. www.3dscanningservices.net/ndi-reverse-engineering-3d-scanning.asp
4. www.qualitydigest.com/inside/twitter-ed/optical-cmms-future-cmms.html
5. www.qualitydigest.com/magazine/2009/apr/article/understanding-optical-measurement.html
8. REVIEW ON 3D PRINTING
By - V.S.Ganachari
Abstract
The term 3D printing covers a host of processes and technologies that offer a full spectrum of
capabilities for the production of parts and products in different materials. 3D printing is a form
of additive manufacturing technology in which a three-dimensional object is created by laying
down successive layers of material. We will look at the different types of 3D printing, their
working principles, methods, applications, benefits and modifications.
Keywords- 3D printing setup, prototype
I. INTRODUCTION
3D printing is a form of additive manufacturing technology in which a three-dimensional object
is created by laying down successive layers of material. Also known as rapid prototyping, it is
a mechanized method whereby 3D objects are quickly made on a reasonably sized machine
connected to a computer containing blueprints for the object. The 3D printing concept of custom
manufacturing is exciting to nearly everyone. This revolutionary method for creating 3D models
with the use of inkjet technology saves time and cost by eliminating the need to design, print and
glue together separate model parts; a complete model can now be created in a single process.
The basic principles include material cartridges, flexibility of output, and
translation of code into a visible pattern. 3D printers are machines that produce physical 3D
models from digital data by printing layer by layer. They can make physical models of objects either
designed with a CAD program or scanned with a 3D scanner, and are used in a variety of industries
including jewellery, footwear, industrial design, architecture, engineering and construction,
automotive, aerospace, dental and medical industries, education and consumer products. Since
the advent of mass production in the early 20th century, consumers' demands have been met by
producing large numbers of goods in significantly less time than ever before. While production
time and price decreased, they did so at the expense of customization. AM makes it possible to
offer customers options to personalize the products and goods they are purchasing, from custom-
made prosthetics to a personalized smart phone case. The importance of customization cannot be
overstated. Researchers agree that customization will continue to grow as a major trend across
industries. J.P. Gownder, vice president and principal analyst for infrastructure and operations
professionals at Forrester, says that while "mass customization has long been the next big thing
in product strategy … changes in customer-facing technology are opening up new opportunities
for product strategists to bring customers into product design, creating both customer loyalty and
higher margins" (Forrester, 2011, p. 12). Marina Wall of the Heinz Nixdorf Institute at the
University of Paderborn also contends that "individuality or mass customization are important
trends driving change, so increased product diversity is important for the future and for meeting
individual customer requirements. AM has great potential for freedom of design that can cope
with these challenges" (as cited in AM Platform, 2013, p. 29). 3D printing is expected to play a
significant role in the future of mass customization. This report explores the potential impact
that this technology may have in various sectors. Through secondary research and conversations
with business analysts, investors, members of the 3D printing community, experts and
entrepreneurs, we investigated some of the potential market opportunities the technology is
unveiling. We also explore sources of capital and nascent business models for innovators
interested in capitalizing on this technology.
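The layer-by-layer principle common to all of these machines can be illustrated with a minimal slicing sketch; the 0.2 mm layer height is a typical, assumed value, not one taken from the text:

```python
import math

def layer_heights(part_height_mm, layer_mm=0.2):
    """Z positions at which successive layers of material are deposited,
    the layering principle of 3D printing (layer height is assumed)."""
    n_layers = math.ceil(part_height_mm / layer_mm)
    return [round((i + 1) * layer_mm, 6) for i in range(n_layers)]

# A 1 mm tall part printed at 0.2 mm resolution needs five layers.
zs = layer_heights(1.0)  # → [0.2, 0.4, 0.6, 0.8, 1.0]
```

A real slicer additionally computes the 2-D cross-section of the CAD model at each of these Z heights and converts it into toolpaths, but the decomposition into layers is the common first step.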
HISTORY OF 3D PRINTING
The technology for printing physical 3D objects from digital data was first developed by
Charles Hull in 1984. He named the technique stereolithography and obtained a patent for it
in 1986. While stereolithography systems had become popular by the end of the
1980s, other similar technologies such as Fused Deposition Modeling (FDM) and Selective
Laser Sintering (SLS) were introduced. In 1993, the Massachusetts Institute of Technology (MIT)
patented another technology, named "3 Dimensional Printing techniques", which is similar to the
inkjet technology used in 2D printers. In 1996, three major products were introduced: "Genisys"
from Stratasys, "Actua 2100" from 3D Systems and "Z402" from Z Corporation. In 2005, Z
Corp. launched a breakthrough product named Spectrum Z510, the first high-definition
color 3D printer on the market. Another breakthrough in 3D printing occurred in 2006
with the initiation of an open-source project named RepRap, aimed at developing a
self-replicating 3D printer. 2007 was the year that marked the turning point for accessible 3D
printing technology, even though few realized it at the time, as the RepRap phenomenon
took root. Dr Bowyer conceived the RepRap concept of an open-source, self-replicating 3D
printer as early as 2004, and the seed was germinated in the following years with some heavy
slog from his team at Bath, most notably Vik Olliver and Rhys Jones, who developed the concept
through to working prototypes of a 3D printer using the deposition process. 2007 was the year
the shoots started to show through and this embryonic, open-source 3D printing movement
started to gain visibility. But it wasn't until January 2009 that the first commercially available
3D printer, in kit form and based on the RepRap concept, was offered for sale. This was the
BfB RapMan 3D printer. It was closely followed by MakerBot Industries in April the same year,
whose founders were heavily involved in the development of RepRap until they departed
from the open-source philosophy following extensive investment. Since 2009, a host of similar
deposition printers have emerged with marginal unique selling points (USPs), and they continue to do so. The interesting dichotomy here is that, while the RepRap phenomenon has given rise to a whole new sector of commercial, entry-level 3D printers, the ethos of the RepRap community is all about open-source development for 3D printing and keeping commercialization at bay. 2012 was the year that alternative 3D printing processes were introduced at the entry level of the market. The B9Creator (utilising DLP technology) came first in June, followed by the Form (utilising stereolithography) in December. Both were launched via the funding site Kickstarter, and both enjoyed huge success. As a result of the market divergence, significant advances in capabilities and applications at the industrial level, and a dramatic increase in awareness and uptake across a growing maker movement, 2012 was also the year that many different mainstream media channels picked up on the technology. 2013 was a year of significant growth and consolidation, one of the most notable moves being the acquisition of MakerBot by Stratasys. Heralded by some as the 2nd, 3rd, and sometimes even 4th Industrial Revolution, what cannot be denied is the impact that 3D printing is having on the industrial sector and the huge potential that 3D printing is demonstrating for the future of consumers. What shape that potential will take is still unfolding before us.
The fabrication freedom offered by 3D printing techniques such as stereolithography and fused deposition modeling has recently been explored in the context of 3D electronics integration, referred to as 3D structural electronics or 3D printed electronics. Enhanced 3D printing may eventually be employed to manufacture end-use parts and thus offer unit-level customization with local manufacturing; however, until the materials and dimensional accuracies improve (an eventuality), 3D printing technologies can be employed to reduce development times by providing advanced, geometrically appropriate electronic prototypes. One paper describes the development process used to design a novelty six-sided gaming die. The die includes a microprocessor and accelerometer, which together detect motion and, upon halting, identify the top surface through gravity and illuminate light-emitting diodes for a striking effect. By applying 3D printing of structural electronics to expedite prototyping, the development cycle was reduced from weeks to hours.
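The top-face detection described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's firmware: it assumes a 3-axis accelerometer that reads roughly ±1 g on the axis aligned with gravity when the die is at rest, and an arbitrary face-to-axis mapping in which opposite faces sum to seven.

```python
def top_face(ax, ay, az):
    """Return the face (1-6) pointing up for a die at rest, given a 3-axis
    accelerometer reading in g. The face-to-axis mapping is hypothetical:
    +X -> 1, +Y -> 2, +Z -> 3, with opposite faces summing to 7."""
    readings = {"x": ax, "y": ay, "z": az}
    # The axis most closely aligned with gravity dominates the reading.
    axis = max(readings, key=lambda k: abs(readings[k]))
    face_for_positive = {"x": 1, "y": 2, "z": 3}[axis]
    return face_for_positive if readings[axis] > 0 else 7 - face_for_positive

def is_at_rest(ax, ay, az, tolerance=0.1):
    """A crude 'halted' test: total acceleration magnitude close to 1 g."""
    return abs((ax**2 + ay**2 + az**2) ** 0.5 - 1.0) < tolerance
```

In the actual device, the microprocessor would poll the accelerometer, wait for something like `is_at_rest` to hold, then light the LEDs corresponding to the face that `top_face` reports.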
According to Henrik J. Nyman and Peter Sarlin, a great deal of attention in supply chain management has been devoted to understanding customer requirements. What are customer priorities in terms of price and service level, and how can companies go about fulfilling these requirements in an optimal way? New manufacturing technology in the form of 3D printing is about to change some of the underlying assumptions for different supply chain set-ups. Their paper explores opportunities and barriers of 3D printing technology, specifically in a supply chain context, and proposes a set of principles that can act to bridge existing research on different supply chain strategies and 3D printing. With these principles, researchers and practitioners alike can better understand the opportunities and limitations of 3D printing in a supply chain management context.
Gaurav Tyagi, in his research paper on 3D printing, covered the introduction and history of 3D printing, and described the manufacture of a model with a 3D printer along with a flow chart of the 3D printing process. Current 3D printing technologies such as stereolithography, fused deposition modelling, selective laser sintering (SLS) and multi-jet modelling (MJM) are also described in the paper, and nano-factory 3D printing technologies related to nanotechnology are introduced. He also explained the applications of 3D printing over wide areas, and its advantages.
II. 3D PRINTING TECHNOLOGY
The starting point for any 3D printing process is a 3D digital model, which can be created
using a variety of 3D software programmes — in industry this is 3D CAD, for Makers and
Consumers there are simpler, more accessible programmes available — or scanned with a 3D
scanner. The model is then ‘sliced’ into layers, thereby converting the design into a file readable
by the 3D printer. The material processed by the 3D printer is then layered according to the
design and the process. As stated, there are a number of different types of 3D printing
technologies, which process different materials in different ways to create the final object.
Functional plastics, metals, ceramics and sand are now all routinely used for industrial
prototyping and production applications. Research is also being conducted for 3D printing bio
materials and different types of food. Generally speaking though, at the entry level of the
market, materials are much more limited. Plastic is currently the only widely used material —
usually ABS or PLA, but there are a growing number of alternatives, including Nylon. There is
also a growing number of entry level machines that have been adapted for foodstuffs, such as
sugar and chocolate.
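The "slicing" step described above can be sketched minimally. This is an illustrative toy, not any real slicer's API: it only computes the heights at which a model of a given z-extent would be sliced, and flags which triangles of a mesh a given slice plane crosses.

```python
def slice_heights(z_min, z_max, layer_height):
    """Heights of the planes at which the model is 'sliced' into layers.
    Sampling at mid-layer is a hypothetical convention for this sketch."""
    heights = []
    z = z_min + layer_height / 2
    while z < z_max:
        heights.append(round(z, 6))   # round away float accumulation error
        z += layer_height
    return heights

def crosses(triangle_z, z):
    """True if a slicing plane at height z intersects a triangle,
    given the triangle's three vertex z-coordinates."""
    return min(triangle_z) <= z <= max(triangle_z)
```

A real slicer would go on to compute the intersection contours of each crossed triangle and write them out in a format the printer can read, but the layer-by-layer decomposition is the essence of the step.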
III. METHODS OF 3D PRINTING
A. Stereo lithography
Stereo lithography (SL) is widely recognized as the first 3D printing process; it was
certainly the first to be commercialized. SL is a laser-based process that works with photopolymer resins, which react with the laser and cure to form a solid in a very precise way, producing very accurate parts. It is a complex process but, simply put, the photopolymer resin is
held in a vat with a movable platform inside. A laser beam is directed in the X-Y axes across the
surface of the resin according to the 3D data supplied to the machine (the .stl file), whereby the
resin hardens precisely where the laser hits the surface. Once the layer is completed, the
platform within the vat drops down by a fraction (in the Z axis) and the subsequent layer is
traced out by the laser. This continues until the entire object is completed and the platform can
be raised out of the vat for removal.
Fig: Stereolithography
Because of the nature of the SL process, it requires support structures for some parts, specifically those with overhangs or undercuts. These structures need
to be manually removed. In terms of other post processing steps, many objects 3D printed using
SL need to be cleaned and cured. Curing involves subjecting the part to intense light in an oven-
like machine to fully harden the resin. Stereolithography is generally accepted as being one of
the most accurate 3D printing processes, with excellent surface finish. However, limiting factors include the post-processing steps required and the stability of the materials over time, as parts can become more brittle with age.
B. Digital Light Processing
DLP — or digital light processing — is a similar process to stereolithography in that it is
a 3D printing process that works with photopolymers. The major difference is the light source.
DLP uses a more conventional light source, such as an arc lamp, with a liquid crystal display
panel or a deformable mirror device (DMD), which is applied to the entire surface of the vat of
photopolymer resin in a single pass, generally making it faster than SL. Also like SL, DLP
produces highly accurate parts with excellent resolution, but its similarities also include the
same requirements for support structures and post-curing. However, one advantage of DLP over
SL is that only a shallow vat of resin is required to facilitate the process, which generally results
in less waste and lower running costs.
Fig. Digital Light Processing
C. Laser Sintering / Laser Melting
Laser sintering and laser melting are interchangeable terms that refer to a laser based 3D
printing process that works with powdered materials. The laser is traced across a powder bed of
tightly compacted powdered material, according to the 3D data fed to the machine, in the X-Y
axes. As the laser interacts with the surface of the powdered material it sinters, or fuses, the
particles to each other forming a solid. As each layer is completed the powder bed drops
incrementally and a roller smoothes the powder over the surface of the bed prior to the next
pass of the laser for the subsequent layer to be formed and fused with the previous layer. The
build chamber is completely sealed as it is necessary to maintain a precise temperature during
the process specific to the melting point of the powdered material of choice. Once finished, the
entire powder bed is removed from the machine and the excess powder can be removed to leave
the ‘printed’ parts. One of the key advantages of this process is that the powder bed serves as an
in-process support structure for overhangs and undercuts, and therefore complex shapes that
could not be manufactured in any other way are possible with this process. However, on the
downside, because of the high temperatures required for laser sintering, cooling times can be
considerable. Furthermore, porosity has been an historical issue with this process, and while
there have been significant improvements towards fully dense parts, some applications still
necessitate infiltration with another material to improve mechanical characteristics. Laser
sintering can process plastic and metal materials, although metal sintering does require a much
higher powered laser and higher in-process temperatures. Parts produced with this process are
much stronger than with SL or DLP, although generally the surface finish and accuracy is not as
good.
Fig. Laser Sintering / Laser Melting
D. Extrusion / FDM / FFF
3D printing utilizing the extrusion of thermoplastic material is easily the most common —
and recognizable — 3DP process. The most popular name for the process is Fused Deposition
Modelling (FDM), due to its longevity; however, this is a trade name, registered by Stratasys, the company that originally developed it. Stratasys' FDM technology has been around since the early 1990s and today is an industrial-grade 3D printing process. However, the proliferation of entry-level 3D printers that have emerged since 2009 largely utilize a similar process, generally referred to as Fused Filament Fabrication (FFF), but in a more basic form due to patents still held by
Stratasys. The earliest RepRap machines and all subsequent evolutions — open source and
commercial —employ extrusion methodology. However, following Stratasys’ patent
infringement filing against Afinia there is a question mark over how the entry-level end of the
market will develop now, with all of the machines potentially in Stratasys’ firing line for patent
infringements. The process works by melting plastic filament that is deposited, via a heated
extruder, a layer at a time, onto a build platform according to the 3D data supplied to the printer.
Each layer hardens as it is deposited and bonds to the previous layer. Stratasys has developed a
range of proprietary industrial grade materials for its FDM process that are suitable for some
production applications. At the entry-level end of the market, materials are more limited, but the
range is growing. The most common materials for entry-level FFF 3D printers are ABS and PLA. The FDM/FFF processes require support structures for any applications with overhanging
geometries. For FDM, this entails a second, water-soluble material, which allows support
structures to be relatively easily washed away, once the print is complete. Alternatively,
breakaway support materials are also possible, which can be removed by manually snapping
them off the part. Support structures, or lack thereof, have generally been a limitation of the
entry level FFF 3D printers. However, as the systems have evolved and improved to incorporate
dual extrusion heads, it has become less of an issue. In terms of models produced, the FDM
process from Stratasys is an accurate and reliable process that is relatively office/studio friendly,
although extensive post-processing can be required. At the entry-level, as would be expected,
the FFF process produces much less accurate models, but things are constantly improving. The
process can be slow for some part geometries, and layer-to-layer adhesion can be a problem, resulting in parts that are not watertight. Again, post-processing using acetone can resolve these
issues.
Fig. Extrusion / FDM / FFF
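The layer-at-a-time deposition described above is driven in practice by G-code toolpaths. The sketch below emits minimal G-code for a single square perimeter; it is a simplification under stated assumptions: `e_per_mm` is a hypothetical flat extrusion ratio standing in for real flow calculations, and the commands are generic linear moves rather than any particular firmware's full dialect.

```python
import math

def square_layer_gcode(x0, y0, size, z, e_per_mm=0.05):
    """Minimal G-code for one square perimeter at height z.

    e_per_mm (mm of filament per mm of travel) is a hypothetical flat
    ratio; real slicers derive flow from nozzle diameter, layer height
    and filament diameter."""
    corners = [(x0, y0), (x0 + size, y0), (x0 + size, y0 + size),
               (x0, y0 + size), (x0, y0)]
    lines = [f"G1 Z{z:.2f} F1200",                             # move to layer height
             f"G0 X{corners[0][0]:.2f} Y{corners[0][1]:.2f}"]  # travel (no extrusion)
    e = 0.0                                                    # cumulative extrusion
    for (xa, ya), (xb, yb) in zip(corners, corners[1:]):
        e += math.hypot(xb - xa, yb - ya) * e_per_mm
        lines.append(f"G1 X{xb:.2f} Y{yb:.2f} E{e:.4f}")       # extruding move
    return lines
```

Repeating this for each slice height, with the infill and support paths a real slicer would add, yields the full print job.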
E. Inkjet: Binder Jetting
There are two 3D printing processes that utilize a jetting technique. The first is binder jetting, where the material being jetted is a binder, selectively sprayed into a powder bed of the part material to fuse it a layer at a time and so create/print the required part. As is the case with other powder bed
systems, once a layer is completed, the powder bed drops incrementally and a roller or blade
smoothes the powder over the surface of the bed, prior to the next pass of the jet heads, with the
binder for the subsequent layer to be formed and fused with the previous layer. Advantages of
this process, like with SLS, include the fact that the need for supports is negated because the
powder bed itself provides this functionality. Furthermore, a range of different materials can be
used, including ceramics and food. A further distinctive advantage of the process is the ability to add a full colour palette easily via the binder.
Fig. Inkjet: Binder Jetting
The parts resulting directly from the machine, however, are not as strong as with the sintering
process and require post-processing to ensure durability.
F. Inkjet: Material Jetting
Material jetting is a 3D printing process whereby the actual build materials (in liquid or molten state) are selectively jetted through multiple jet heads (with others simultaneously jetting support materials). The materials tend to be liquid photopolymers, which are cured with a pass
of UV light as each layer is deposited. The nature of this product allows for the simultaneous
deposition of a range of materials, which means that a single part can be produced from multiple
materials with different characteristics and properties. Material jetting is a very precise 3D
printing method, producing accurate parts with a very smooth finish.
Fig. Inkjet Material jetting
G. Selective Deposition Lamination (SDL)
SDL is a proprietary 3D printing process developed and manufactured by Mcor
Technologies. There is a temptation to compare this process with the Laminated Object
Manufacturing (LOM) process developed by Helisys in the 1990s due to similarities in layering
and shaping paper to form the final part. However, that is where any similarity ends. The SDL
3D printing process builds parts layer by layer using standard copier paper. Each new layer is
fixed to the previous layer using an adhesive, which is applied selectively according to the 3D
data supplied to the machine. This means that a much higher density of adhesive is deposited in
the area that will become the part, and a much lower density of adhesive is applied in the
surrounding area that will serve as the support, ensuring relatively easy “weeding,” or support
removal. After a new sheet of paper is fed into the 3D printer from the paper feed mechanism
and placed on top of the selectively applied adhesive on the previous layer, the build plate is
moved up to a heat plate and pressure is applied. This pressure ensures a positive bond between
the two sheets of paper. The build plate then returns to the build height, where an adjustable tungsten carbide blade cuts one sheet of paper at a time, tracing the object outline to create the
edges of the part. When this cutting sequence is complete, the 3D printer deposits the next layer
of adhesive and so on until the part is complete.
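The selective adhesive application described above can be sketched as a simple per-cell density map. This is an illustrative toy, not Mcor's actual control logic: it assumes each layer is available as a binary mask of part vs. support cells, with hypothetical density values.

```python
def adhesive_map(layer_mask, part_density=1.0, support_density=0.1):
    """Per-cell adhesive dosing for one SDL layer: high density where the
    cell belongs to the part, a much lower density in the surrounding
    support region so it can be 'weeded' away easily afterwards."""
    return [[part_density if cell else support_density for cell in row]
            for row in layer_mask]
```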
Fig. Selective deposition method
H. Bio Materials
There is a huge amount of research being conducted into the potential of 3D printing bio
materials for a host of medical (and other) applications. Living tissue is being investigated at a
number of leading institutions with a view to developing applications that include printing
human organs for transplant, as well as external tissues for replacement body parts. Other
research in this area is focused on developing foodstuffs, meat being the prime example.
I. Other
And finally, one company that does have a unique (proprietary) material offering is
Stratasys, with its digital materials for the Objet Connex 3D printing platform. This offering
means that standard Objet 3D printing materials can be combined during the printing process —
in various and specified concentrations — to form new materials with the required properties.
9. SAP: Bright Opportunity for a Career
By - M.A.Sutar
SAP stands for Systems, Applications and Products in Data Processing. Five German engineers founded it in 1972. The original name was German, Systeme, Anwendungen, Produkte, meaning "Systems, Applications and Products." The original SAP idea was to provide customers with the ability to interact with a common corporate database for a comprehensive range of applications. Gradually, the applications have been assembled, and today many corporations, including IBM and others, are using SAP products to run their own businesses.
SAP applications, built around their latest system, provide the capability to manage financial, asset, and cost accounting, production operations and materials, personnel, plants, and archived documents. The R/3 system runs on a number of platforms and uses the client-server model. The latest version of R/3 includes a comprehensive Internet-enabled package.
SAP is ERP software which large organizations use to manage their business. SAP has several modules, each of which represents a business process. Modules are usually abbreviated for the business process they represent: for instance, HCM for Human Capital Management, FI for Financial Accounting, MM for Materials Management, and SD for Sales & Distribution. Altogether there are some nineteen core modules. These modules are highly integrated in real time, which means that if information is shared between modules, the data is entered only once. This reduces the chance of errors arising from repetitive entry and also reduces man-hours. Managers and decision makers always have information at their fingertips, and this helps them in effective decision making.
SAP has been around for nearly four decades. Nine out of ten Fortune-500 companies
have already implemented SAP (not counting the thousands of to-be Fortune-500 companies
that have SAP). There are well over 10 million SAP users worldwide and jobs keep popping up
all around the world. SAP is the leading ERP (Enterprise Resource Planning) software. Because of its liberal open architecture, there are millions of programmers working around the world to provide interaction between thousands of major software packages and SAP.
SAP is usually implemented in phases. The first phase is when organizational structure
and accounting components are configured, tested and then taken live. Gradually more modules
are turned on. SAP has recently recast its product offerings under a comprehensive Web
interface, called mySAP.com, and added new e-business applications, including customer
relationship management (CRM) and supply chain management (SCM).
As of January 2007, SAP, a publicly traded company, had over 38,400 employees in over 50 countries, and more than 36,200 customers around the world. SAP is turning its
attention to small- and medium-sized businesses (SMBs). A recent R/3 version was provided for IBM's AS/400 platform. SAP is at the center of today's technology revolution. The market
leader in enterprise application software, SAP helps organizations fight the damaging effects of
complexity, generate new opportunities for innovation and growth, and stay ahead of the
competition.
SAP (Systems, Applications and Products in Data Processing) is a package which offers:
1. Flexibility.
2. Customized solutions to suit your business.
3. Highly integrated with other modules.
4. Industry specific modules with a deep insight.
5. Continuous support.
SAP PROCESS
1) Quality Management
Quality Management is an integral part of the logistics function and within the SAP system it is
fully integrated with complementary components including Materials Management (MM), Plant
Maintenance (PM), and Production Planning (PP). Quality management is important to the warehouse, for inspecting incoming material as it arrives at the facility, and for manufacturing operations, where the quality of in-process items is checked during the manufacturing process and finished goods are inspected before they reach the warehouse.
2) Quality Management Components
The QM module covers three distinct areas: planning, notifications and inspections. The
quality planning function allows your quality department to plan inspections for goods receipts
from vendors and production, work in process and stock transfers. A quality notification can be
used to request action to be taken by the quality department. This may be to review an internal
problem, an issue with items from a vendor or a customer complaint (Financial standard
newspaper of March 22, 2004). The quality inspection is the physical inspection using
specifications defined in quality planning.
3) Planning
In SAP, the quality inspection plan defines how an item is to be inspected. The plan, an important part of the QM planning process, establishes how the inspection is to take place, which characteristics of the item are to be inspected in each operation, and what test equipment is required for the inspection.
4) Notifications
The quality notification records a problem that is either identified by a customer against a
product that is produced by your company, or by your company against the product of a vendor.
A notification can also be raised internally to report a quality issue that has arisen on the
production line or somewhere at the facility. You can assign a quality notification to an existing
QM order to create a new order for the specific notification.
5) Inspections
A quality inspection occurs when someone in the quality department inspects an item as
determined by the inspection planning functionality. An inspection is based on one or more
inspection lots, where a lot is a request to inspect a specific item. Inspection lots can be created
manually by a user or automatically by the SAP system. There are a number of events that can
trigger an automatic inspection lot. Most inspection lots are automatically triggered by a movement of materials, such as a goods receipt or a goods issue, but other events, like the creation or release of a production order, the creation of deliveries, or a transfer of stock in the warehouse, can also trigger one.
The inspection lot functionality allows an inspection of a product in the warehouse. The product
can be a finished product, a raw material, or a piece of equipment that is used in the facility.
When an inspection is performed the results of the inspection should be recorded for each of the
inspection characteristics. The inspection lot can be accepted as being within tolerance or can be
rejected if the inspection finds that the results do not reach the prescribed specification for a
certain characteristic. When the inspection is complete for the inspection lot, a usage decision
can be made as to whether the material can be accepted or rejected. After the quality department
has made a usage decision the inspection is technically closed.
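The inspection flow described above (record a result per characteristic, compare it against the prescribed specification, then make a usage decision for the lot) can be sketched as follows. The data structures are hypothetical illustrations, not SAP's actual QM data model.

```python
def usage_decision(results, specs):
    """Accept an inspection lot only if every recorded characteristic
    lies within its prescribed (low, high) specification.

    results: {characteristic: measured value}
    specs:   {characteristic: (low, high) tolerance band}"""
    for name, value in results.items():
        low, high = specs[name]
        if not (low <= value <= high):
            return "rejected"      # out of tolerance on this characteristic
    return "accepted"              # within tolerance; inspection can be closed
```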
ADVANTAGES OF SAP ERP
1. Reduction in sales order processing costs.
2. Reduced time to calculate selling price.
3. Increased Cash Flow (one-time event).
4. Reducing the number of days sales outstanding by freeing up capital (i.e., reduced working capital; free cash flow).
5. Increased revenues due to less stock outages (reduction in lost sales).
6. Reduction in distribution costs.
DISADVANTAGES
1) High costs.
2) Buggy software.
3) Over-complex.
4) Lack of security.
APPLICATIONS OF SAP
1) Aerospace and defense
2) Automotive industry
3) Banking
4) Health care
5) Logistics
6) Mining
References
1. System application and programming manual.
2. www.learnSAP.com (ver. ECC 6.0)
3. IJMIE, Volume 2, Issue 9, September 2012.
4. IJIRSET, Volume 2, Issue 6, 2013.
5. Case study: John Cotton Group.
6. Financial Standard newspaper of March 22, 2004, page 18.
10. Nitinol (NiTi) - Shape Memory Alloys
By - K.I.Nargatti
Shape memory alloys (SMAs) have enormous potential for a wide variety of applications. A large body of work exists on the characterization of the microstructure and stress-strain behaviour of these alloys, Nitinol (NiTi, Nickel Titanium) in particular. Shape memory refers to the special ability of certain materials to remember shape, usually induced thermally and also initiated mechanically. This effect, in general, allows the material to recover its initial shape after plastic deformation. The SMA can be strained up to 8% and can be recovered upon unloading/heating without failure, compared to 1% in the case of conventional materials. Shape recovery is done by heating the SMA material above a transformation temperature predetermined by the material composition and processing history. SMAs with transformation temperatures above their operating temperature (often room temperature) exhibit the shape memory effect (SME). SMAs with transformation temperatures below operating temperature exhibit pseudoelasticity, also called superelasticity (SE). SMAs in the superelastic state are capable of recovering their previous shape after the removal of even relatively high applied strains. This differs from the SME in that the deformation from an applied load is not plastic, and shape recovery is achieved isothermally on unloading. Fig. 1 shows a schematic representation of the stress-strain response during SME, SE, and plastic deformation material behavior due to different testing temperatures.
Fig. 1. Stress-strain Response
Transformation Mechanism :-
The two unique properties described above are made possible through a solid state phase change, that is, a molecular rearrangement, which occurs in the SMA. During solid state phase
change molecules remain closely packed so that the substance remains a solid. In most SMAs, a
temperature change of only about 10°C is necessary to initiate this phase change. The two
phases, which occur in SMAs, are martensite and austenite. Martensite is the soft and exists at
lower temperatures. The molecular structure in this phase is twinned as shown in Fig. 2(b).
Upon deformation this phase takes on the second form shown in Fig. 2(c). Austenite, the
stronger phase of SMAs, occurs at higher temperatures. The shape of the austenite structure is
cubic, as shown in Fig. 2(a). The austenite to martensite phase transformations of the alloy can
be characterised by four transformation temperatures. The temperatures at which the phases begin and finish
forming are represented by the variables Ms, Mf, As, and Af.
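The four transformation temperatures partition an SMA's thermal history into regimes. A minimal sketch, with purely illustrative default temperatures rather than data for any specific NiTi alloy, could classify the expected phase like this:

```python
def sma_regime(temp_c, heating, Ms=20.0, Mf=5.0, As=30.0, Af=45.0):
    """Classify the dominant phase of an SMA at temp_c (deg C).

    Ms/Mf: martensite start/finish on cooling; As/Af: austenite
    start/finish on heating. The default temperatures here are
    illustrative placeholders, not data for a specific NiTi alloy.
    """
    if heating:
        if temp_c < As:
            return "martensite"
        if temp_c < Af:
            return "transforming (martensite -> austenite)"
        return "austenite"
    else:
        if temp_c > Ms:
            return "austenite"
        if temp_c > Mf:
            return "transforming (austenite -> martensite)"
        return "martensite"

print(sma_regime(50, heating=True))    # fully austenitic above Af
print(sma_regime(0, heating=False))    # fully martensitic below Mf
```

Note the hysteresis built into the two branches: on heating the transformation happens between As and Af, while on cooling it happens between Ms and Mf, which is why heating and cooling must be distinguished.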
Properties of NiTi SMA :-
• Melting point :- 1240~1310 °C
• Density :- 6.4~6.5 g/cc
• Electrical resistivity :- 0.5~1.1 × 10⁻⁶ Ω·m
• Elongation to failure :- 25~40%
• Thermal conductivity :- 8~18 W/(m·K)
• Coefficient of thermal expansion :- 6~11 × 10⁻⁶ /K
• Young's modulus :- 30~75 GPa
• Yield strength :- 150~600 MPa
• Ultimate tensile strength :- 800~1500 MPa
• Fatigue strength (N = 10⁶) :- 350 MPa
• Transformation temperature :- -100~150 °C
• Max. recovery stress :- 600~800 MPa
Applications of Shape Memory Alloy :-
Super elastic Devices
NiTi superelastic devices are used for applications which demand the extraordinary flexibility
and torqueability of NiTi. NiTi has the ability to absorb large amounts of strain energy and
release it as the applied strain is removed. Examples include:
• Damping Devices
• Eyeglass Frames
• Cellular Telephone Antennas
• Medical Guidewires
• Surgical Localization Hooks
• Bone Suture Anchors
• Endodontic (Root Canal) Files
• Orthodontic Arches
Shape Memory Actuation Devices
Shape Memory Actuation Devices utilize the shape memory effect to recover a particular shape
upon heating above their transformation temperatures. Common actuation temperatures are
human body temperature and boiling water temperature. Examples of shape memory actuation
devices include:
• Aerospace Actuators
• Robotics
• Satellite Release Bolts
• Vascular Stents
• Pipe Couplings
• Electrical Connectors
References :-
1) Otsuka, K., C. M. Wayman, “Shape Memory Materials”, Cambridge University Press, New
York, NY, 1998.
2) Kauffman, G. B., and Mayo, I., “The Story of Nitinol: The Serendipitous Discovery of the
Memory Metal and Its Applications.” The Chemical Educator, vol. 2, (1996).
3) Otsuka, K.; Ren, X. B., Recent Developments in the Research of Shape Memory Alloys.
Intermetallics (1999), 7 (5), 511-528.
ENERGY AND POWER
11. Phase Change Materials
By – P.D.Kulkarni
Abstract:-
Thermal storage is typically used to level the load from an intermittent or cyclic heat
source. In phase change thermal storage, as the name implies, heat is absorbed by a melting
solid or phase change material (PCM) and released when the solid refreezes, taking advantage
of the heat of fusion to increase the storage capacity. A module which can store heat at a rapid
rate and release it slowly can reduce the size and weight of the equipment used for heat
rejection to that required to handle the average load. Phase change thermal storage provides
one method of achieving this objective. Rapid thermal response requires a high thermal
diffusivity and a small thickness in the direction of heat flow. An ideal storage medium will have
a high heat of fusion, a high thermal conductivity, and a melting point low enough to provide an
adequate temperature difference between the material to be cooled and the melting temperature.
Too low a melting temperature can increase the difficulty of refreezing the medium. The
water/ice transition provides one of the largest heats of fusion available but has a melting
temperature too low for many applications. Pure paraffin waxes have about 2/3 the heat of
fusion of water but can provide melting points of different temperatures depending on the
number of carbon atoms in the molecule. Wax also has the additional advantage of being
chemically stable, noncorrosive, and nontoxic. Some salts lose and gain water of hydration with
a significant corresponding energy change in the temperature range of interest.
Keywords: - Thermal Energy storage, Phase change materials, Latent heat storage, Solar Water
Heating System.
Introduction:-
The thermal energy absorbed by a material when changing its phase at a constant
temperature is called ‘latent heat’. For practical applications, materials that exhibit low volume
changes are used, for example, solid-to-liquid and some special solid-to-solid phase change
materials are applicable.
The commonly used phase change materials for technical applications are: paraffins
(organic), salt hydrates (inorganic) and fatty acids (organic). For cooling applications, it is also
possible to use ice storage. Latent heat storage offers a significant advantage if the application
needs temperature cycles closely around the melting point, since in those cases the
corresponding storage density of water is small. In liquid or solid state the specific heat capacity
is lower for most PCM materials compared to water.
Possible applications in solar buildings for PCM (PCT = phase change temperature) are
• Cold storage for solar assisted cooling applications (PCT around 5 -18°C)
• PCM (Micro-Capsules) incorporated in wall material (PCT around 22°C)
• Heating storage for Solar Energy and longer running time of boilers. (PCT around 60°C)
• Hot storage for Solar Assisted Air Conditioning (PCT around 80°C)
Techniques to Transfer Heat from Source to PCM
In all cases, heat must be transferred between the phase change material and the fluid
cycle (charging, discharging). Different techniques are used, including:
• Direct contact between phase change material and heat transfer fluid: this needs materials that
are chemically stable over long periods of direct contact, and the PCM must solidify in
small particles, ensuring sufficient heat transfer during subsequent melting.
• Macroscopic-capsules: this is the most frequently used encapsulation method. The most
common approach is to use a plastic module, which is chemically neutral with respect to both
the phase change material and the heat transfer fluid. The modules typically have a diameter of
some centimeters.
• Micro-encapsulation: this is a relatively new technique in which the
PCM is encapsulated in a small shell of polymer material with a diameter of a few micrometers
(at the moment only for paraffins). A large heat-exchange surface results, and the powder-like
spheres can be integrated into many construction materials or used as an aqueous pumpable slurry.
Plasters incorporating micro-encapsulated PCM have been on the market since 2004. PCM slurries are
still under development [1].
All of the above techniques transfer heat from the HTF to the PCM, but owing to the low
thermal conductivity of PCMs some modifications, described later in this article, are necessary.
Main Advantages of Phase Change Storage in Comparison to Conventional Water Storage
Techniques
• Higher thermal energy storage capacity (smaller storages) than sensible energy storage, at least
if only small useful temperature differences can be achieved.
• Relatively constant temperature during charging and discharging.
• The number of burner cycles for the back-up heat generation unit, and therefore its CO and HC
emissions, can be reduced.
Main Disadvantages of Phase Change Storage
• Higher investment cost, in most cases, compared to water storage.
• In many cases, the peak power during discharge is limited due to limited heat conduction in the
solid state of PCM. This is the main limit determining the acceptable size for the storage
modules.
• Limited experience with long-term operation (after many thousands of cycles).
• Risks of loss of stability of the solution and deterioration of the encapsulation material.
Classification of Phase Change Materials:-
Initially, the solid–liquid PCMs behave like sensible heat storage (SHS) materials; their
temperature rises as they absorb heat. Unlike conventional SHS, however, when PCMs reach the
temperature at which they change phase (their melting temperature) they absorb large amounts
of heat at an almost constant temperature. The PCM continues to absorb heat without a
significant rise in temperature until all the material is transformed to the liquid phase. When the
ambient temperature around a liquid material falls, the PCM solidifies, releasing its stored latent
heat. A large number of PCMs are available in any required temperature range from −5 up to
190 °C. Within the human comfort range between 20–30 °C, some PCMs are very effective.
They store 5 to 14 times more heat per unit volume than conventional storage materials such as
water, masonry or rock.
Organic Phase Change Materials:-
An organic phase change material (PCM) possesses the ability to absorb and release
large quantity of latent heat during a phase change process over a certain temperature range. The
use of PCMs in energy storage and thermal insulation has been tested scientifically and
industrially in many applications. Broad-based research and development studies
concentrating on the characteristics of known organic PCMs and of new candidate
materials, on PCM storage methods, and on the resolution of specific phase change
problems, such as low thermal conductivity and supercooling, have been reviewed.
Advantages:-
• Freeze without much undercooling
• Ability to melt congruently
• Self-nucleating properties
• Compatibility with conventional materials of construction
• No segregation
• Chemically stable
• High heat of fusion
• Safe and non-reactive
• Recyclable
Disadvantages:-
• Low thermal conductivity in the solid state; high heat transfer rates are required during
the freezing cycle
• Low volumetric latent heat storage capacity
• Flammable; this can be partially alleviated by specialist containment
• To obtain reliable phase change points, most manufacturers use technical grade paraffins,
which are essentially paraffin mixtures completely refined of oil, resulting in high
cost
Inorganic Phase Change Materials:-
Salt hydrates are typical examples of inorganic phase change materials. The advantages and
disadvantages of the salt hydrate group are as follows:
Advantages:-
• High volumetric latent heat storage capacity
• Availability and low cost
• Sharp melting point
• High thermal conductivity
• High heat of fusion
• Non-flammable
Disadvantages:-
• Very high change of volume
• Supercooling is a major problem in the solid-liquid transition
• Nucleating agents are needed, and they often become inoperative after repeated cycling
Eutectics:-
Eutectics are mixtures of two or more components that melt and solidify congruently at a single,
sharply defined temperature, lower than the melting points of the individual constituents.
Advantages:-
• Eutectics have a sharp melting point, similar to a pure substance
• Volumetric storage density slightly above that of organic compounds
Disadvantages:-
• Only limited data are available on thermo-physical properties, as these materials are
relatively new to thermal storage applications
Hygroscopic Materials:-
Many natural building materials are hygroscopic; that is, they can absorb water (water
condenses) and release water (water evaporates). The processes are:
• Condensation (gas to liquid): ΔH < 0; enthalpy decreases (exothermic process), gives off
heat.
• Vaporization (liquid to gas): ΔH > 0; enthalpy increases (endothermic process), absorbs
heat (i.e. cools).
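As a rough illustration of the magnitudes involved, the heat exchanged in these processes is the mass of water times its latent heat of vaporization. The value used below, about 2257 kJ/kg, is the standard figure at 100 °C; it is somewhat higher at room temperature, so this is an order-of-magnitude sketch:

```python
H_FG_WATER = 2257.0  # kJ/kg, latent heat of vaporization of water at 100 deg C

def condensation_heat_kj(mass_kg):
    # Exothermic: condensation releases the latent heat into the surroundings
    return mass_kg * H_FG_WATER

def vaporization_heat_kj(mass_kg):
    # Endothermic: evaporation absorbs the same amount, cooling the surface
    return mass_kg * H_FG_WATER

# Condensing 0.1 kg of vapour onto a wall releases roughly 226 kJ of heat
print(condensation_heat_kj(0.1))
```

The symmetry of the two functions is the point: a hygroscopic wall moves the same latent heat in both directions, dampening both temperature and humidity swings.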
Selection Criteria for Phase Change Materials
Thermal Properties:-
• Phase change temperature fitted to application
• High change of enthalpy near temperature of use
• High thermal conductivity in both liquid and solid phases
Physical Properties:-
• Low density variation
• High density
• Small or no subcooling
Chemical Properties:-
• Stability
• No phase separation
• Compatibility with container materials
• Non toxic, non flammable, non polluting
Economic Properties:-
• Low cost
• Easily available
Storage Capacity of Phase Change Materials
Latent thermal energy storage exploits the latent heat of fusion of phase change
materials. The relatively large latent heat of the phase transition gives a higher
energy storage density than traditional sensible heat storage systems. Unfortunately,
PCMs have a lower sensible heat capacity than water; therefore, when a system is operated with
high temperature differences, the advantage of the latent heat is reduced by the lower
sensible heat, and in such applications water storage is more favorable.
For temperature differences of more than 20 °C, and especially with low
concentrations of PCM, the storage capacity is not much better than that of water. PCM
slurries should therefore be used in systems operated with low temperature differences.
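The trade-off above can be sketched numerically. The figures below (paraffin specific heat and latent heat) are illustrative round numbers, not data for a particular PCM, and the comparison is per unit mass; it assumes the PCM's melting point lies inside the operating window:

```python
CP_WATER = 4.186      # kJ/(kg K), specific heat of liquid water
CP_PARAFFIN = 2.1     # kJ/(kg K), rough average of solid/liquid values (assumed)
L_PARAFFIN = 200.0    # kJ/kg, typical order of magnitude for paraffin waxes (assumed)

def water_storage_kj_per_kg(delta_t):
    # Water stores only sensible heat over the temperature swing delta_t
    return CP_WATER * delta_t

def paraffin_storage_kj_per_kg(delta_t):
    # PCM stores sensible heat over delta_t plus the latent heat of fusion
    return CP_PARAFFIN * delta_t + L_PARAFFIN

for dt in (5, 20, 60):
    print(dt, water_storage_kj_per_kg(dt), paraffin_storage_kj_per_kg(dt))
```

For a 5 K swing the paraffin stores on the order of ten times as much heat as water, but by a 60 K swing the advantage has shrunk to well under a factor of two, which is exactly why PCM stores pay off only in systems cycled closely around the melting point.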
Problems Related to Use of Phase Change Materials
Long Term Stability
Insufficient long term stability of the storage materials and containers is a problem that
has limited the widespread use of latent heat stores. This poor stability is due to two factors:
degradation of the material properties under thermal cycling, and/or corrosion between the PCM
and the container.
Phase Segregation
The high storage density of salt hydrate materials is difficult to maintain and usually
decreases with cycling. This is because most hydrated salts melt incongruently, forming a
lower hydrate, which makes the process irreversible and leads to a continuous decline
in their storage efficiency.
Segregation can be prevented by changing the properties of the salt hydrate with the
addition of another material that hinders the heavier phases from sinking to the bottom. This can be
achieved either with gelling or with thickening materials. Gelling means adding a cross-linked
material (e.g. polymer) to the salt to create a three dimensional network that holds the salt
hydrate together. Thickening means the addition of a material to the salt hydrate that increases
the viscosity and hereby holds the salt hydrate together [2].
Subcooling
Subcooling is another serious problem associated with all hydrated salts. It appears when a
salt hydrate starts to solidify only at a temperature below its freezing temperature. Several
approaches have been studied to solve this problem. One is the use of hydrated salts in
direct contact heat transfer between an immiscible heat transfer fluid and the hydrated salt
solution. Another solution is the use of nucleators.
Low Thermal Conductivity
Among the various organic and inorganic PCMs, normal paraffin is an excellent PCM for
low-temperature thermal energy storage owing to its large latent heat, good stability, negligible
subcooling and non-toxicity. Moreover, for paraffin waxes both the melting point and the
heat of fusion increase with chain length, so paraffins are available in a selection of
melting point ranges, allowing a good match between melting range and system operating
temperature.
Although paraffin waxes exhibit desirable properties as PCMs, they have a low
thermal conductivity, below 0.4 W/(m K). This increases the thermal resistance of the
growing layer of molten or solidified medium and may therefore cause the surface heat flux
to decrease, further reducing the rate of heat storage in the thermal storage device.
In order to offset the low thermal conductivity of the PCM, the device must be designed
with an adequate heat transfer area and the material must have a suitable heat transfer
coefficient. Device structures such as spherical capsules and unfinned or finned cylinders
have therefore been researched, although these reduce the effective energy storage capacity.
The low thermal conductivity of paraffin wax can also be offset by inserting fins, metal
matrices, etc. [3].
Improvement in Thermal Conductivity of Phase Change Material:-
By using graphite foam composite for thermal energy storage device
In recent years, the development of high-thermal-conductivity graphite foams has
opened a new horizon for thermal storage and thermal management applications. Graphite foams
offer bulk thermal conductivities varying with density from 40 to 150 W/(m K), densities
from 0.2 to 0.6 g/cm3, good mechanical properties and chemical inertness. Graphite foam is
compatible with many surfaces and has a porous structure whose porosity can reach 95% of the
volume, providing a high ratio of surface area to volume, so it can be filled with various
materials. It is also superior to porous metals such as aluminum, copper or nickel owing to
its higher thermal conductivity. These advantages make graphite/PCM composites desirable for
latent heat thermal storage applications. Py et al.
[4] proposed a supported PCM made of paraffin impregnated by capillary forces in a graphite
matrix. They found that the thermal conductivity of the composite was equal to that of the sole
porous graphite matrix. Moreover, the composite presented the same anisotropy with respect to
the compression axis. Zhong et al. [5] characterized the thermal performance of paraffin
wax/graphite foam composites using experimental measurements of thermal diffusivity and
latent heat; the results indicated noticeable improvements in the thermal diffusivity of the
composite compared to that of pure PCM, especially at lower foam porosity. A
numerical study has been carried out by Lafdi et al. [6] to investigate the thermal performance of
graphite foams with different porosities saturated with the PCM (paraffin wax). The energy
absorption rate of the graphite/PCM composite was compared with that of pure PCM, and a
significant improvement was found.
Encapsulation of the materials
The utilization of the high latent heat storage capability of phase change materials is one
key to storing thermal energy efficiently. However, the existing technology is limited by the
high volumetric expansion and low thermal conductivity of phase change materials (PCMs), low
energy density, low operating temperatures and high cost. These limitations can be addressed
with an encapsulated PCM system that operates at temperatures above 500 °C and exploits the
heat transfer modes available at such high temperatures. Encapsulation with a sodium silicate
coating on preformed PCM pellets has been investigated, and carbon steel, a low-cost,
high-temperature metal, has been used as a capsule for PCMs with melting points above
500 °C [7].
Several methods to encapsulate PCM were investigated. The objective here is to develop
an encapsulation or storage system which would:
• Have enough strength to hold the PCM inside during melting and solidification.
• Be non-porous, to prevent any molten PCM leakage.
• Be stable at high temperatures and under continual thermal cycling.
• Be a good thermal conductor, to effectively transfer heat from the heat transfer
fluid (HTF) to the PCM.
• Be non-reactive to the molten PCM.
• Be non-reactive to the HTF.
• Be a low cost material.
• Have low fabrication cost.
In order to develop an encapsulation with the above characteristics, there are two
approaches. One is to make a PCM pellet of the desired shape, either spherical or
cylindrical, and then apply a coating or a series of coatings which act as a shell to hold the
PCM inside. The other approach is to fabricate a shell (cylindrical or spherical) and then fill the
PCM inside it.
References:-
1. Journal on ''Inventory of Phase Change Materials (PCM)'' by Luisa Cabeza , Andreas Heinz
and Wolfgang Streicher- Feb.2005
2. B. Zalba, J.M. Marín, L.F. Cabeza, H. Mehling, ''Review on thermal energy storage with
phase change: materials, heat transfer analysis and applications'', Applied Thermal
Engineering 23 (2003) 251-283.
3. IOP Science Journal on ‘‘PCM/ graphite foam composite for thermal energy storage device''
by C X Guo, X L Ma and L Yang. IOP Conf. Series: Materials Science and Engineering 87
(2015) 012014
4. Py X, Olives R and Mauran S 2001 Int. J Heat Mass Transfer 44 2727-37
5. Zhong Y, Guo Q, Li S, Shi J and Liu L 2010 J. Sol. Energy Mater. Sol. Cells 94 1011-14
6. Lafdi K, Mesalhy O and Elgafy A 2008 J. Carbon 46 159-68
7. IOP Science Journal on ‘‘PCM/ graphite foam composite for thermal energy storage device''
by C X Guo, X L Ma and L Yang. IOP Conf. Series: Materials Science and Engineering 87
(2015) 012014
12. Concentrated Solar Power Plants
By - S.C.Naik
Introduction
Solar energy is the radiant light and heat from the Sun, harnessed by humans
since ancient times using a range of ever-evolving technologies. Solar radiation, along with
secondary solar resources such as wind and wave power, hydro and biomass, accounts for most of
the available renewable energy on Earth.
Solar power technologies provide electrical generation by means of heat engines
(concentrated solar power plants, CSP) or photovoltaics. Solar thermal power plants use
concentrated sunlight to generate electricity: the direct solar irradiation is collected and
concentrated by large mirror fields and converted into electricity using heat engines, usually
conventional steam turbines. They offer great potential for a future sustainable energy supply,
especially in parts of the world with high direct solar irradiation.
Parabolic trough and solar tower are the most proven large-scale solar power technologies
available today. Thermal storage can be integrated into a solar thermal steam plant in order to
prolong the operating hours of a CSP plant.
The main limitation of existing solar thermal power plants is intermittent production of
electricity: we can use it only when the sun shines. The production of electricity varies with the
weather and stops at night, which decreases overall plant performance and commercial
acceptability. These fluctuations can be smoothed either with an efficient
storage system or by combining solar with fossil energy. Since the reduction of CO2 emissions is
part of the concept of solar energy use, a thermal storage system seems the most attractive
solution. Energy storage is an important issue because modern energy systems usually assume
continuous availability of energy.
For the future market potential of Solar Thermal power plants, it is beneficial to integrate
a thermal storage system. The most advanced thermal energy storage for solar thermal power
plants is a two-tank storage system where the heat transfer fluid (HTF) also serves as storage
medium. However, the HTF used in state-of-the-art parabolic trough power plants is expensive,
dramatically increasing the cost of larger HTF storage systems. An engineering study was
carried out to evaluate a concept, where another (less expensive) liquid medium such as molten
salt is utilized as storage medium rather than the HTF itself. [1]
Status of Technology
Concentrating solar power (CSP) plants consist of a large field of collectors, a heat transfer
fluid/steam generation system, a Rankine steam turbine/generator cycle, and optional thermal
storage and/or fossil-fired backup systems.
Figure A: Schematic diagram of a CSP plant
Figure A shows an exemplary schematic of a CSP plant. In this case the collector field is
made up of a large field of single-axis-tracking parabolic trough solar collectors. The solar field
is modular in nature and comprises many parallel rows of solar collectors, normally aligned on a
north-south horizontal axis. Each solar collector has a linear parabolic-shaped reflector that
focuses the sun’s direct beam radiation on a linear receiver located at the focus of the parabola.
The collectors track the sun from east to west during the day to ensure that the sun is
continuously focused on the linear receiver. A heat transfer fluid (HTF) is heated to as high as
390 °C as it circulates through the receiver and returns to the steam generator of a conventional
steam cycle power plant. Given sufficient solar input, the plants can operate at full-rated power
using solar energy alone.
To enable these plants to achieve rated electric output during periods of low solar
radiation, overcast skies or night time, thermal storage can be integrated into the plant design
to allow solar energy to be stored and dispatched when power is required. A fossil-fired backup
plant is also possible.
In brief, concentrating solar power plants (CSP) produce electric power by converting
direct solar radiation. The process consists of two parts: one that concentrates solar energy and
converts it into medium to high-temperature heat, and another that converts heat energy to
electricity. The conversion of heat to electricity is realized as a conventional power cycle, for
instance through a steam turbine or a Stirling engine.
In the case of flat-plate collectors, a significant disadvantage is their decreasing efficiency
with increasing temperature. Thus for high-temperature (> 100 °C) applications, concentrating
collectors are more favorable. Concentrating collectors are characterized by an aperture area
which is greater than the absorber area.
The concentration factor (i.e. the ratio of aperture to absorber area) is greater than one.
Concentration is achieved by optical systems (mirrors, lenses) interposed between the
source of radiation and the energy-absorbing surface. By concentration the energy delivery
temperature can be increased, because heat losses are reduced.
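A common first-order way to see this effect is the steady-state efficiency relation eta = eta_opt - U_L*(T_abs - T_amb)/(C*G): thermal losses scale with the small absorber area while optical gains scale with the aperture. The parameter values below are illustrative assumptions, not measured collector data:

```python
def collector_efficiency(t_abs, t_amb, conc_ratio, dni=800.0,
                         eta_opt=0.75, u_loss=5.0):
    """Simple steady-state efficiency of a concentrating collector.

    eta = eta_opt - U_L * (T_abs - T_amb) / (C * G). Losses (U_L, in
    W/(m2 K) of absorber area) scale with the small absorber, gains with
    the aperture, so a higher concentration ratio C keeps efficiency up
    at high temperature. All parameter values here are illustrative.
    """
    return eta_opt - u_loss * (t_abs - t_amb) / (conc_ratio * dni)

# Flat plate (C = 1) vs parabolic trough (C ~ 25) at 200 deg C absorber
print(collector_efficiency(200, 25, 1))    # negative: a flat plate cannot run this hot
print(collector_efficiency(200, 25, 25))   # the trough still delivers useful heat
```

With these numbers the non-concentrating collector's losses exceed its optical gain at 200 °C, while at C = 25 the loss term is divided by the concentration ratio and the efficiency stays near its optical limit, which is exactly the argument made in the text.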
The achievable range of a CSP plant using any collector technology depends on the solar field
size, the solar collector assembly (SCA), the heat collection element (HCE), the HTF medium, the
power block and economic estimates.
A typical storage concept consists of two storage tanks filled with a liquid storage
medium held at different temperature levels. When charging the storage, the medium is
pumped from the cold to the hot tank and heated using the collected solar heat. When
discharging, the medium is pumped from the hot to the cold tank, giving up its heat
in a steam generator that drives the power cycle.
The thermal storage system consists of the following principal elements: the nitrate salt
inventory, the nitrate salt storage tanks, the oil-to-salt heat exchangers, and the nitrate salt
circulation pumps. [2]
Existing CSP technologies with Thermal energy storage
Of the eight thermal energy storage systems installed in solar thermal electric plants, seven
have been of an experimental or prototype nature and one has been a commercial unit.
Table A gives the characteristics of the existing units. All have been sensible heat storage
systems: two single-tank oil thermocline systems, four single-medium two-tank systems (one
with oil and three with salt) and two dual-medium single-tank systems.
Table A Existing CSP technologies with Thermal energy storage system in worldwide
Solar Millennium developed the first parabolic trough power plants in Europe. The
Andasol 1 plant (Figure C) started its test run in autumn 2008. With a collector surface area of
over 510,000 square metres (equal to 70 soccer pitches), it is among the largest existing solar
power plants in the world.
Figure C Andasol Plant, Spain.
Following a construction period of around two years, the Andasol power plants will
supply up to 200,000 people with environmentally friendly solar electricity. They will also
contribute to Spain's supply reliability and, in particular, cover the demand peaks in the Spanish
electricity grid during the summer months. The increased electricity demand is primarily caused
by the high energy consumption of air-conditioning units. Each power plant has an electricity
output of 50 megawatts and operates with thermal storage. A full thermal reservoir can continue
to run the turbines for about 7.5 hours at full-load, even if it rains or long after the sun has set.
The heat reservoirs each comprise two tanks, measuring 14 m in height and 36 m in diameter,
containing molten salt. Each reservoir holds 28,500 tons of storage medium.
References
1. Morin G. Report MAO1-GM-0803-E02
2. G Morin, Werner Platzer Techno-economic system simulation and optimization of solar
thermal power plants
3. Duffie, J.A., and Beckamn, W.A., (2006), Solar Engineering of Thermal Processes, Third
Edition, Editorial John Wiley & Sons, Inc.
4. NREL Report No. NREL/SR-550-27925
5. Andasol Project, Spain, www.solarmillennium.de
6. http://www.trec-uk.org.uk/resources/pictures/stills4.html
13. SOLAR AIRCRAFT: FUTURE NEED
By - S.S.Chavan
Conventional aircraft use fossil fuels, which have a limited life, high cost and are
polluting. The prices of petrol and other fuels keep rising because of their growing scarcity,
so there is great demand for a non-exhaustible, unlimited source of energy such as solar energy.
A solar aircraft is one way to utilize it: solar panels collect the solar radiation for
immediate use and store the surplus for night flight. Sunshine is free and never gets used up,
and there is a lot of it: the sunlight that heats the Earth in an hour carries more energy than
the people of the world use in a year. A small device called a solar cell can make electricity
directly from sunlight.
The dream of flight powered only by the sun's energy has long motivated
scientists and hobbyists. A solar aircraft is one which collects energy from the sun by means
of photovoltaic solar cells. The energy may be used to drive an electric motor that powers the
aircraft.
Such airplanes store excess solar energy in batteries for night use. With rapidly
increasing traffic problems in the world, and in our country too, small solar aircraft could be
used for transporting goods or materials over short distances. Using solar panels also frees
up space, since engines and turbines are dispensed with. Quite a few manned and unmanned solar
aircraft have been developed and flown.
The basic principle is to harness solar power by means of solar panels covering the whole
surface of the wing. These panels convert radiant energy into electric energy, which charges a
battery that drives an electric motor. A propeller mounted on the motor shaft produces thrust
continuously.
Because of this the aircraft moves, and lift is produced on the wing by the dynamic effect of
the air, opposing the downward force of weight. During the night, the only energy
available comes from the battery.
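The day/night energy balance described above can be written down directly: the daytime harvest must cover daytime cruise plus the battery energy needed for the night, including charge/discharge losses. All the numbers fed in below are illustrative assumptions, not data for any real aircraft:

```python
def can_fly_through_night(wing_area_m2, irradiance_w_m2, cell_eff,
                          cruise_power_w, day_h, night_h, batt_eff=0.9):
    """Crude day/night energy balance for a solar aircraft.

    Daytime harvest from the wing-mounted cells must cover daytime
    cruise plus the battery energy needed (after round-trip losses) to
    cruise through the night. All inputs are illustrative assumptions.
    """
    harvest_wh = wing_area_m2 * irradiance_w_m2 * cell_eff * day_h
    day_use_wh = cruise_power_w * day_h
    night_need_wh = cruise_power_w * night_h / batt_eff
    return harvest_wh >= day_use_wh + night_need_wh

# 3 m2 wing, 20% cells, ~700 W/m2 average irradiance, 150 W cruise, 12 h day / 12 h night
print(can_fly_through_night(3.0, 700, 0.20, 150, 12, 12))
```

The sketch makes the design pressure obvious: doubling the cruise power requirement breaks the balance, which is why solar aircraft push so hard on lightweight structure and low-power flight.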
As a transport vehicle, a solar aircraft cuts fuel costs and so increases overall profit. Using free energy means that, after the initial investment in a small aircraft, transporting materials costs nothing. Big, bulky transport vehicles are one main cause of road traffic, so small, light aircraft could reduce congestion, and faster transport means even expensive materials can be delivered in less time.
One kind of solar aircraft serves the astronautics industry as a transport vehicle for travel into space or to neighbouring planets. Worldwide use of solar aircraft could speed the development of countries, and the use of solar energy as a power source will grow as conventional fuels are depleted. Solar aircraft are thus the future of the aviation field, a future tool for solving traffic problems, and a device for astronomical work.
References
[1] Daniel P. Raymer, "Aircraft Design: A Conceptual Approach", Conceptual Research Corporation, Sylmar, California.
[2] Noth, R. Siegwart, and W. Engel, "Design of Solar Powered Airplanes for Continuous Flight", Version 1.1, December 2007.
[3] E. N. Jacobs, K. E. Ward, and R. M. Pinkerton, "The characteristics of 78 related airfoil sections from tests in the variable-density wind tunnel", NACA Report No. 460, 1933.
[4] Moran, Jack (2003), An Introduction to Theoretical and Computational Aerodynamics, Dover, p. 7. ISBN 0-486-42879-6.
14. Solar Impulse 2: Aviation Future
By - S.D.Patil
Solar Impulse 2 is the name of the Swiss long-range experimental solar-powered aircraft project, as well as of the operational aircraft set to achieve the first round-the-world solar-powered flight. The aircraft is as wide as an Airbus A380, yet, being made of lightweight carbon fibre, it weighs only about as much as a family car, at 2,300 kg. It is capable of flying for five consecutive days without any fuel. It started its journey to circumnavigate the globe from Abu Dhabi and was scheduled to return there in August 2015. The aircraft's more than 17,000 solar cells work with four motorcycle-type motors to power the aircraft at speeds of 36 km/h to 140 km/h.
The first aircraft (earlier prototype), often referred to as Solar Impulse 1, was designed
to remain airborne up to 36 hours. It conducted its first test flight in December 2009. In July
2010, it flew an entire diurnal solar cycle, including nearly nine hours of night flying, in a 26
hour flight. Piccard and Borschberg completed successful solar-powered flights from
Switzerland to Spain and then Morocco in 2012, and conducted a multi-stage flight across the
United States in 2013.
A second aircraft, completed in 2014 and named Solar Impulse 2, carries more solar
cells and more powerful motors, among other improvements. In March 2015, Piccard and
Borschberg began an attempt to circumnavigate the globe with Solar Impulse 2, departing from
Abu Dhabi in the United Arab Emirates. The aircraft was scheduled to return to Abu Dhabi in
August 2015, upon the completion of its multi-stage journey. By 1 June 2015, the plane had
traversed Asia. On 3 July 2015, the plane completed the longest leg of its journey, from Japan to
Hawaii. During that leg, however, the aircraft's batteries experienced thermal damage that is
expected to take some time to repair.
So how does this plane work?
The aircraft contains 17,000 solar cells built into a huge 72-meter wingspan, bigger than the 60-meter wingspan of the Boeing 747. (Solar Impulse 2 also weighs just 2,300 kg, while the Boeing 747 has a maximum take-off weight of about 440,000 kg.) Pilots Bertrand Piccard and André Borschberg, both Swiss, take turns: one pilot flies one leg, and the other meets the flight at the next stop and takes over the controls. They are supported by a 60-person team tasked with anticipating every possible scenario to ensure the plane's proper function and safety.
Image: Solar Impulse 2
What about night time?
During the day, the solar cells recharge the Solar Impulse 2’s lithium batteries, allowing
the plane to fly at night. The plane does depend on appropriate weather to ensure there’s enough
sunlight for the solar cells to absorb.
Has anyone tried this before?
Yes, at least when it comes to building solar-powered aircraft. Back in the 1970s, an
American alternative energy company called AstroFlight developed an unmanned experimental
aircraft called the AstroFlight Sunrise. After four years of development, the Sunrise finally took
off and flew over a military reservation in California, becoming the world’s first aircraft to fly
on solar power.
According to the Organization of the Petroleum Exporting Countries' (OPEC's) World Oil Outlook 2014, the demand from the aviation sector in 2011 was 5.1 million barrels of oil equivalent (MBOE). The aviation sector accounts for about 6% of total oil consumption; in 2010, petrol accounted for 26% and diesel for 29% of total global oil consumption.
According to the International Energy Agency, in 2006, aviation represented 11% of all
transport energy use, a share that will increase to 19% by 2050. Various studies show that the demand for aviation fuel is not only increasing operational costs (through rising fuel costs) but is also having a negative impact on the environment. From 331.6 million passengers in 1971, air traffic has grown to almost 3 billion in 2012. The increase is primarily linked to growing tourism, urbanisation and immigration, which require faster and more efficient links between cities. All this will mean higher consumption of non-renewable aviation fuel, and hence Solar Impulse 2, although experimental, is a major breakthrough: it is the first aircraft to come close to perpetual flight powered by a renewable source of energy.
References:
1) The Times of India, July 4, 2015.
2) http://www.solarimpulse.com
3) https://en.wikipedia.org/wiki/Solar_Impulse
4) http://www.bbc.com/news/science-environment-33538442
15. SOLAR OPERATED POWER BANK
By - S.S.Shirguppikar
1. INTRODUCTION
Mobile phones play an important role in everyday life, and much of our communication is relayed through them. The charger is an important accessory for trouble-free, efficient use of a phone's many functions. A common problem is rapid battery discharge: browsing a phone's features continuously drains the battery quickly, and in such cases a solar-powered mobile charger can be a better alternative to a mains charger. The solar charger can be powered by the sun, via a USB cable, or directly from a wall socket.
The utility of this development is mainly aimed at the user's convenience under the following limitations:
• Not having a charging adaptor
• Not having an electrical connection
• Not having sufficient sunlight
The project outlines the manufacture of a solar charger that charges a mobile phone using solar energy instead of mains electricity.
2. PHOTOVOLTAIC CELL
2.1 INTRODUCTION
The early development of solar technologies, starting in the 1860s, was driven by an expectation that coal would soon become scarce. However, development of solar technologies stagnated in the early 20th century in the face of the increasing availability, economy, and utility of coal and petroleum.
The term "photovoltaic" comes from the Greek photo, meaning "light", and "voltaic", meaning electric, from the name of the Italian physicist Volta, after whom the unit of electromotive force, the volt, is named. The term "photo-voltaic" has been in use in English since 1849. In 1839, the nineteen-year-old Edmond Becquerel, while experimenting with an electrolytic cell made up of two metal electrodes, found that certain materials would produce small amounts of electric current when exposed to light. The practical photovoltaic cell was developed in 1954 at Bell Laboratories by Daryl Chapin, Calvin Souther Fuller and Gerald Pearson, using a diffused silicon p-n junction.
2.2 PRINCIPLE OF PV CELL
Solar cell works on the principle of photovoltaic effect. Sunlight is composed of photons,
or "packets" of energy. These photons contain various amounts of energy corresponding to the
different wavelengths of light. When photons strike a solar cell, they may be reflected or
absorbed, or they may pass right through. When a photon is absorbed, its energy is transferred to an electron in an atom of the cell (which is actually a semiconductor). With its newfound energy, the electron can escape from its normal position in that atom and become part of the current in an electrical circuit. Each cell is made of two layers with a barrier between them. The first layer (Layer A) contains electrons that are free to move to the second layer (Layer B), which attracts them more strongly, so Layer A's electrons migrate to Layer B even without sunlight. Layer B then holds the extra electrons, but although its grip on them is stronger than Layer A's, it can still lose them because of the pull of the nucleus on each electron. When sunlight strikes Layer B, the electrons are dislodged, and their natural response is to return to the now positively charged Layer A, which they can do because they are free to move.
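The photon-absorption step can be made concrete with a short calculation: a photon of wavelength λ carries energy E = hf = hc/λ, and it can free an electron only if E exceeds the binding energy. The 1.6 eV threshold below is an assumed illustrative value, not the property of any particular cell material.

```python
# Photon energy vs. binding energy: an electron is freed only if the photon
# energy E = h*c/lambda exceeds the threshold. The 1.6 eV threshold is an
# assumed illustrative value, not a real material property.

H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Energy of one photon, in electronvolts."""
    return H * C / (wavelength_nm * 1e-9) / EV

def can_free_electron(wavelength_nm, threshold_ev=1.6):
    return photon_energy_ev(wavelength_nm) > threshold_ev

print(round(photon_energy_ev(500), 2))   # a green visible photon, in eV
print(can_free_electron(500))            # visible light: enough energy
print(can_free_electron(1500))           # far infrared: not enough
```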
3. MANUFACTURING OF SOLAR CELLS
3.1. Raw Materials
The basic component of a solar cell is pure silicon, which does not occur in pure form in nature. To make solar cells, the raw material, silicon dioxide in the form of quartzite gravel or crushed quartz, is first placed into an electric arc furnace, where a carbon arc is applied to release the oxygen. The products are carbon dioxide and molten silicon. At this point the silicon is still not pure enough for solar cells and requires further purification.
Raw Materials
Pure silicon is derived from silicon dioxides such as quartzite gravel (the purest silica) or crushed quartz. The resulting pure silicon is then doped (treated) with phosphorus and boron to produce an excess of electrons and a deficiency of electrons respectively, making a semiconductor capable of conducting electricity. The silicon disks are shiny and require an anti-reflective coating, usually titanium dioxide. The solar module consists of the silicon semiconductor surrounded by protective material in a metal frame. The protective material consists of an encapsulant of transparent silicone rubber or butyryl plastic (commonly used in automobile windshields) bonded around the cells, which are then embedded in ethylene vinyl acetate. A polyester film (such as Mylar or Tedlar) forms the backing. Terrestrial arrays get a glass cover; satellite arrays get a lightweight plastic cover.
3.3. Types of Solar Cells
1. Monocrystalline silicon (c-Si): often made using the Czochralski process. Single-crystal wafer cells tend to be expensive, and because they are cut from cylindrical ingots they cannot completely cover a square module without substantial waste of refined silicon; hence most c-Si panels have uncovered gaps at the four corners of the cells.
2. Poly- or multicrystalline silicon (poly-Si or mc-Si): made from cast square ingots, large blocks of molten silicon carefully cooled and solidified. Poly-Si cells are less expensive to produce than single-crystal silicon cells, but are less efficient.
Types of solar cells
4. SOLAR MOBILE CHARGER UNIT
A portable solar charger for a mobile phone can be charged with sunlight or with mains electrical power. It stores power from the sun and lets you charge a mobile phone, iPod, etc. at your convenience. The PV cell converts light into electric current using the photoelectric effect, in which electrons are emitted from matter (metals and non-metals, liquids and gases) as a consequence of absorbing energy from electromagnetic radiation of very short wavelength, such as ultraviolet or visible light. The photons of a light beam have a characteristic energy determined by the frequency of the light. If an electron within the material absorbs the energy of one photon and thus has more energy than the work function (the electron binding energy) of the material, it is ejected; if the photon energy is too low, the electron is unable to escape from the material.
Solar Mobile Charger Unit
5. MATERIALS REQUIRED
5.1. SOLAR CELLS
Solar cells
5.2. ACRYLIC SHEET
Cast acrylic sheet is a material with unique physical properties and performance characteristics. It weighs half as much as the finest optical glass, yet equals it in clarity and is up to 17 times more impact resistant.
Acrylic sheet
6. CONCLUSION
1. In a solar mobile charger there is no ripple, since DC power is used directly to charge the mobile.
2. Battery life is longer, as high voltages are not developed.
3. The versatility of a solar mobile charger is high.
4. The life of the battery is extended by using a solar mobile charger.
References:
1] T. Voigt, H. Ritter, and J. Schiller, "Utilizing solar power in wireless sensor networks", Proc. IEEE Conference on Local Computer Networks, 2003.
2] G. Park, "Overview of Energy Harvesting Systems (for Low-Power Electronics)", presentation at the First Los Alamos National Laboratory Engineering Institute Workshop: Energy Harvesting, 2005.
3] Zeman, M. (n.d.). Source: http://ocw.tudelft.nl/courses/microelectronics solar-cells/readings
4] J. A. Paradiso and T. Starner, "Energy scavenging for mobile and wireless electronics", Pervasive Computing 4(1):18–27, 2005.
16. A Review on Investigation of Flow through Centrifugal Pump Impellers Using Computational Fluid Dynamics
By R. R. Gaji
Computational fluid dynamics (CFD) analysis is being increasingly applied in the design of centrifugal pumps. With the aid of CFD, the complex internal flows in water pump impellers, which are not yet fully understood, can be predicted well, speeding up the pump design procedure; CFD is therefore an important tool for pump designers. This article surveys previous research by different researchers and scientists in the three-dimensional simulation of internal flow. In the studies reviewed, a commercial three-dimensional Navier-Stokes code, CFX, with a standard k-ε two-equation turbulence model was used to simulate the problem under examination, with the finite-volume method and an unstructured grid system employed to solve the discretized governing equations.
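As a minimal illustration of the standard k-ε model named above: the model closes the Reynolds-averaged momentum equations by supplying an eddy viscosity computed from the turbulent kinetic energy k and its dissipation rate ε. The input values in this sketch are arbitrary, not from any pump case.

```python
# Eddy-viscosity relation at the heart of the standard k-epsilon model:
# nu_t = C_mu * k^2 / eps. Input values here are arbitrary illustrations.

C_MU = 0.09  # standard model constant

def turbulent_viscosity(k, eps):
    """Kinematic eddy viscosity [m^2/s] from turbulent kinetic energy k
    [m^2/s^2] and its dissipation rate eps [m^2/s^3]."""
    return C_MU * k ** 2 / eps

nu_t = turbulent_viscosity(k=0.5, eps=10.0)
print(nu_t)  # 0.09 * 0.25 / 10
```

The solver adds this `nu_t` to the molecular viscosity cell by cell, which is what lets a steady RANS calculation mimic the mixing effect of unresolved turbulence in the impeller passages.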
Many CFD studies of the complex flow in all types of centrifugal pumps have been reported. Oh and Ro (2000) used a compressible time-marching method, a traditional SIMPLE method, and the commercial program CFX-TASCflow to simulate the flow pattern through a water pump, and compared how these methods differ in predicting the pump's performance.
Goto (1992) presented a comparison between the measured and computed exit-flow
fields of a mixed flow impeller with various tip clearances, including the shrouded and
unshrouded impellers, and confirmed the applicability of the incompressible version of the
three-dimensional Navier-Stokes code developed by Dawes (1986) for a mixed-flow centrifugal
pump [1].
Zhou and Ng (1998) and Ng and colleagues (1998) also developed a three-dimensional time-marching, incompressible Navier-Stokes solver using the pseudo-compressibility technique to study the flow field through a mixed-flow water-pump impeller. The applicability of the original code was validated by comparison with many published experimental and computational results.
Recently, Kaupert and colleagues (1996), Potts and Newton (1998), and Sun and Tsukamoto (2001) studied pump off-design performance using the commercial software CFX-TASCflow, FLUENT, and STAR-CD, respectively. Although these researchers predicted reverse
flow in the impeller shroud region at small flow rates numerically, some contradictions still existed. For example, Kaupert's experiments showed the simultaneous appearance of shroud-side reverse flow at the impeller inlet and outlet, but his CFD results failed to predict the reverse flow at the outlet. Sun and Tsukamoto (2001) validated their predicted head-flow curves, diffuser inlet pressure distribution, and impeller radial forces against experimental data over the entire flow range, and predicted back flow at small flow rates, but they did not show an exact back-flow pattern along the impeller outlet.
From this literature it is clear that most previous research, especially work based on numerical approaches, has focused on the design or near-design state of pumps; few efforts were made to study off-design performance. Centrifugal pumps are widely used in many applications, and in some special cases the pump system must operate over a wide flow range, so knowledge of off-design pump performance is a necessity. On the other hand, few researchers have compared flow and pressure fields among different types of pumps. There is therefore still a lot of work to be done in these fields.
References:
1. Goto, A. 1992. Study of internal flows in a mixed-flow pump impeller at various tip clearances using three-dimensional viscous flow computations. ASME Journal of Turbomachinery 114:373–382.
2. Oh, J. S., and Ro, S. H. 2000. Application of time marching method to incompressible centrifugal pump flow, 219–225. Proceedings of the 2nd International Symposium on Fluid Machinery and Fluid Engineering. Beijing: Tsinghua University Press.
3. Kaupert, K. A., Holbein, P., and Staubli, T. 1996. A first analysis of flow field hysteresis in a pump impeller. Journal of Fluids Engineering 118:685–691.
4. Zhou, W. D., and Ng, E. Y. K. 1998. 3-D viscous flow simulation of mixed-flow water pump impeller with tip-clearance effects. Proceedings of the 4th International Conference and Exhibition on Pumps and Systems, pp. 189–198. Singapore: HQ Link Pte Ltd.
17. COOLING RATE ENHANCEMENT BY MAGNETIC NANOFLUID
By - P.B.Patil
Most cooling systems remove excess heat with water pumped through pipes. Some pipes include fins or grooves on their surfaces to increase the area available for heat transfer, but these features raise manufacturing costs. Water can also be pumped through the system faster to enhance heat transfer, at the price of higher energy costs and a greater pressure drop. To find a better way to remove heat from cooling systems, especially in nuclear power facilities, researchers investigated how magnetic nanofluids affect heat-transfer rates in a flowing system.
Concept of magnetic field
Lin-Wen, Jacopo Buongiorno and Reza Azizian of MIT's Nuclear Reactor Laboratory conducted an experiment demonstrating that the heat transfer coefficients of magnetite nanofluids increased by up to 300% when a local magnetic field was applied. These impressive results indicate that this approach could be a highly effective, low-cost way to eliminate hotspots in cooling pipes, which can otherwise lead to system failures.
The experiment used magnetite nanofluids: colloidal magnetite nanoparticles suspended in a base fluid. The main attraction of nanofluids in thermal engineering is that their enhanced thermophysical properties (such as thermal conductivity) relative to the base fluid can improve thermal management of the system. In a typical nanofluid the nanoparticles are uniformly dispersed; in a suspension of magnetic nanoparticles, however, the particle distribution can be controlled with an external magnetic field, which enhances the thermal conductivity.
In the absence of an external magnetic field, the heat transfer characteristics of the flowing magnetite nanofluid can be predicted by classical correlations.
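A representative "classical correlation" is the Dittus-Boelter equation for fully developed turbulent pipe flow, Nu = 0.023 Re^0.8 Pr^n (n = 0.4 when the fluid is heated). The sketch below uses assumed flow and property values, not numbers from the MIT experiment.

```python
# Dittus-Boelter correlation for turbulent pipe flow:
#   Nu = 0.023 * Re^0.8 * Pr^n   (n = 0.4 heating, 0.3 cooling)
#   h  = Nu * k_fluid / D
# Re, Pr, and the property values below are assumed for illustration only.

def dittus_boelter_h(re, pr, k_fluid, diameter, heating=True):
    """Heat transfer coefficient [W/m^2.K] for fully developed turbulent flow."""
    n = 0.4 if heating else 0.3
    nu = 0.023 * re ** 0.8 * pr ** n
    return nu * k_fluid / diameter

# Water-like fluid in a 10 mm tube, Re = 20,000, Pr = 5.4, k = 0.62 W/m.K:
h = dittus_boelter_h(re=20_000, pr=5.4, k_fluid=0.62, diameter=0.01)
print(round(h), "W/m^2.K")
```

When the magnetic field concentrates particles near the heated wall, the measured coefficient departs from this no-field baseline; that departure is exactly the enhancement the experiment quantifies.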
Experimental setup
The experimental setup consisted of a closed-loop flow system equipped with a pump,
flow meter, heat exchanger, thermocouples, and pressure transducer. The test section was fabricated from a stainless steel tube. Eleven K-type thermocouples were evenly
distributed and connected to the outer wall of the tubing along the test section. A constant heat
flux was provided across the test section, which was well insulated to minimize heat loss. The
fluid (either de-ionized water or nanofluid) was pumped through the system and heated up by a
constant heat flux as it passed through the test section. The fluid then returned to an
accumulator, where a heat exchanger maintained the fluid at a constant temperature. NdFeB grade 42 block permanent magnets were used to generate magnetic fields along the test section.
Measurements showed that the local heat transfer coefficient of the magnetite nanofluid increased by up to 300% when a magnetic field was applied locally. The increase was found to be a function of flow rate, magnetic field strength, and field gradient, indicating that the magnets draw the particles closer to the heated surface of the tube and greatly enhance the transfer of heat from the fluid. Without the magnets in place, the low-concentration magnetite nanofluid behaves just like water, with no change in its cooling properties.
References:
1) www.asme.org/engineering
2) www.nanotechetc.com
3) www.mit.edu
18. Cryo Desalination: A Refrigeration Solution for a Thirsty World
By - J.S.Jadhav
Life on our planet requires fresh water. According to the WHO, water scarcity affects roughly one-third of the world's population, approximately 2.3 billion people, and the World Water Council states that this crisis will become more acute over the next fifty years as the world population grows. Yet water covers nearly three quarters of our planet, and 97.5% of it is saltwater. A practical, economically viable desalination process is therefore crucial to overcoming this crisis.
Introduction
Present practices in Desalination
Desalination refers to a water treatment process that separates water from a salt solution. Today, the most commonly used desalination processes are variations of thermal processes (multistage distillation and vapor compression) and membrane processes (reverse osmosis).
These systems have steadily evolved and improved in performance over the years but have not
yet attained the elusive goals of environmental friendliness and low operating costs needed by a
thirsty world. They have systemic problems: polluting chemicals, membrane fouling, capacity
limitations, expensive construction materials, and high-energy demands.
Cryo desalination: Actual process
Desalination by freezing is based on the fact that when the temperature of saline water is lowered to its freezing point and further heat is removed, the ice crystals that form consist of essentially pure water, leaving behind a brine solution. In contact with the liquid, these crystals become a slush; brine adheres to the crystals and becomes trapped, and complete removal of brine from the slush is difficult to achieve. As a result, when the ice melts into water, the residual brine in the slush makes the water salty. This is the reason freeze desalination failed in the past.
The problem of separating ice from brine is solved by a method never previously used in freeze desalination: flotation. Simply stated, it works as follows: ice floats on water, and many fluids also float on water. By selecting a fluid whose density lies between those of ice and brine, we can effectively separate the ice from the brine; harvesting this separated ice produces fresh water.
The approach achieves ice-brine separation through the interaction of oil, brine, and ice in a separation column. Cryo desalination is highly energy efficient, approaching the thermodynamic minimum. This efficiency is achieved by using the "cold energy" in the ice to condense the refrigerant vapours, thereby recapturing a substantial portion of the energy expended to make the ice.
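The flotation step hinges on a density ordering: ice is less dense than the separation fluid, which in turn is less dense than brine, so the fluid settles between the floating ice and the sinking brine. The check below uses typical handbook densities, not figures from the patent.

```python
# Density-ordering check behind the flotation separation:
#   rho_ice < rho_fluid < rho_brine
# Densities are typical handbook values, not figures from the patent.

RHO_ICE = 917.0      # kg/m^3, pure ice near 0 degC
RHO_BRINE = 1050.0   # kg/m^3, concentrated seawater brine (assumed)

def suitable_separation_fluid(rho_fluid):
    """A candidate fluid works only if it floats on brine but not on ice."""
    return RHO_ICE < rho_fluid < RHO_BRINE

print(suitable_separation_fluid(960.0))    # a moderately light oil: usable
print(suitable_separation_fluid(850.0))    # too light: would float above the ice
```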
The credit goes to Norbert Bucshbaum, a retired chemical engineer, who was granted a patent for the idea on April 15, 2014.
References:
[1] Braj M. Misra and Himangshu K. Sadhukhan, "Desalination and Water Reuse in India - An Overview", History, Development and Management of Water Resources, Vol. II
[2] Z. Lu and L. Xu, "Freezing Desalination Process", Thermal Desalination Processes, Vol. II
[3] R. Clayton, "A review of current knowledge: desalination for water supply", third edition, June 2015, pp. 19-26
[4] www.crydesalination.com
19. Enhancement of Heat Transfer Rate from Notched Fins by Considering Aspect Ratio
By - P.V. Mali
Abstract:
Recent developments in technology have created a demand for high rates of heat dissipation from lightweight, compact heat transfer components. To meet this demand, finned surfaces are used to increase the heat transfer rate. Excess heat must be dissipated to the surroundings for optimum performance of a system; this is especially important in cooling internal combustion engine heads, thermal power plants, electronic circuits, etc. Nowadays these components are becoming more compact while generating heat continuously, and excessive heat shortens component life, so an effective cooling system is needed. With fins, the turbulence produced is sufficient to increase the rate of heat transfer.
Area of interest: effect of aspect ratio (depth of the notch) on a rectangular notched fin under natural convection.
Introduction
Different types of fins have been used to increase the heat transfer rate, with shapes including rectangular, V-shaped, triangular, trapezoidal and circular. Some researchers used plain fins and some used notched fins, shown in Fig. 1, with notch shapes that were likewise rectangular, V-shaped, triangular, trapezoidal or circular. The heat transfer rate through notched fins was higher than through plain fins. Senol Baskaya and Murat Ozek carried out a parametric study of aluminium fins, examining the effect of fin spacing, height, length and temperature difference on the overall heat transfer.
Fig. 1. Different Shapes of Notched Fins

Mathematical calculations were carried out to investigate horizontal rectangular fin arrays with and without a notch. The objective of the work is to determine the heat transfer characteristics theoretically, and further to find the enhancement in heat transfer in the case
of notched fin arrays over normal fin arrays. In lengthwise-short arrays where a single-chimney flow pattern is present, the central portion of the fin flat near the base becomes ineffective, because relatively hot air comes in contact with that portion of the fin: the air entering from the two sides flows lengthwise, gets heated up, and hence tends to rise, forming the single-chimney pattern. For the experiments, fin arrays were formed by assembling fin flats and separate spacers tied together with bolts and nuts, the spacers separating the fin flats into channels. Insulating bricks were used to guard against leakage of heat from the bottom and sides of the fin array. Preliminary work showed the insulating bricks to be quite effective in preventing heat leakage, so additional guard heaters were not used; the side and bottom heat losses were nonetheless measured and accounted for.
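For context on the magnitudes involved, the heat dissipated by a plain (un-notched) straight fin is classically estimated with the adiabatic-tip formula q = sqrt(h·P·k·A_c)·θ_b·tanh(mL), with m = sqrt(h·P/(k·A_c)). The dimensions and convection coefficient in this sketch are assumed values, not data from the study above.

```python
# Classical adiabatic-tip estimate for a plain straight fin:
#   q = sqrt(h*P*k*A_c) * theta_b * tanh(m*L),  m = sqrt(h*P/(k*A_c))
# All input values below are assumed for illustration.
import math

def fin_heat_rate(h, p, k, a_c, length, theta_b):
    """Heat dissipated [W] by a straight fin with an adiabatic tip.
    h: convection coefficient [W/m^2.K], p: perimeter [m], k: conductivity
    [W/m.K], a_c: cross-section [m^2], theta_b: base excess temperature [K]."""
    m = math.sqrt(h * p / (k * a_c))
    return math.sqrt(h * p * k * a_c) * theta_b * math.tanh(m * length)

# Aluminium fin strip 1 m wide, 3 mm thick, 50 mm long, base 40 K above ambient,
# natural-convection coefficient taken as 10 W/m^2.K:
q = fin_heat_rate(h=10.0, p=2.006, k=200.0, a_c=0.003, length=0.05, theta_b=40.0)
print(round(q, 1), "W")
```

A notch removes the ineffective central material near the base described above, which is why a notched array can outperform this plain-fin baseline per unit mass.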
Conclusion:
Excessive heat reduces the life of a component, so an effective cooling system is needed. Notched fin arrays promote enough turbulence to increase the rate of heat transfer beyond that of plain fin arrays.
References:
1. Baskaya S., Sivrioglu M., and Ozek M., 2000, Parametric study of natural convection
heat transfer from horizontal rectangular fin arrays, Int. J. Thermal Science, 39, 797–
805.
2. Sane N.K., Sane S.S., and Parishwad G.V., 2006, Natural Convection Heat Transfer Enhancement in Horizontal Rectangular Fin Array with Inverted Notch, 18th National & 7th ISHMT-ASME Heat and Mass Transfer Conference, IIT Guwahati, 312-317.
3. Mechanical Engineers Handbook, 2006, Energy and Power, Volume 4, Third Edition
4. Yeh R. H.,1997, Optimum Spacings of longitudinal Convecting Fin Arrays, Journal of
marine science and technology, vol. 5, No. 1, pp. 47-53
5. Cohen A. B.,2009, Design of Optimum Plate-Fin Natural Convective Heat Sinks, ASME,
vol. 125, pp. 208-215
6. Cohen A. B., 2002, Design and optimization of air-cooled heat sinks for sustainable development, Thermal Challenges in Next Generation Electronics Systems, Joshi and Garimella (eds.), Rotterdam.
7. Alessa A. H., Maqableh A. M., 2009, Enhancement of Natural Convection heat transfer
from a fin by rectangular perforation with aspect ratio of two, Int. J. of Physical Science
Vol. 4, pp. 540-547.
8. Alessa A. H., Maqableh A. M., 2008, Enhancement of Natural Convection heat transfer from a fin by triangular perforation of bases parallel and toward its tip, Int. J. of Applied Mathematics and Mechanics, edition 29(8), pp. 1033-1044.
9. Surywanshi S. D., Sane N. K., 2009,Natural Convection Heat Transfer From Horizontal
Rectangular, Int. J. Heat Transfer, ASME, Vol. 131
10. Gaurav K., Kamal R., Ankur D.,2014, Experimental Investigation of Natural convection
from Heated Triangular Fin Array within a Rectangular Enclosure, Int. J. of Applied
Science Engineering Research, vol. 4, pp. 203-210.
20. Heat Pipes in Solar Collectors: Overview and Basics
By - S. A. Urunkar,
INTRODUCTION
Solar thermal is one of the most cost-effective renewable energy technologies, and solar
water heating is one of the most popular solar thermal systems.
Solar water heaters (SWHs) have been tested since the early 1970s. Classical solar
collectors were of the tube-in-fin flat plate type. Since then, new developments and innovations
have resulted in more efficient collectors with evacuated tubes [1], U-tubes [2] and heat pipe
[3] systems.
Evacuated tubes with selective surfaces minimize convective heat losses from the
absorber tubes and reduce emissive heat losses. Test standards have also evolved, from
indoor performance testing of solar collectors [4] to outdoor system performance testing [5]. Huang
[6] and Chang [7] suggested modifications to the Taiwanese standard [8].
Fig. 1 Heat pipes in flat plate collector
STRUCTURE AND PRINCIPLE
The heat pipe is hollow with the space inside evacuated, much the same as the solar tube.
In this case the goal is not insulation but rather altering the state of the liquid inside. Inside the
heat pipe is a small quantity of purified water and some special additives. At sea level water
boils at 100 °C, but at the top of a mountain the boiling temperature is lower than
100 °C. This is due to the difference in air pressure.
Based on this principle of water boiling at a lower temperature with decreased air
pressure, evacuating the heat pipe achieves the same result. The heat pipes used in AP
solar collectors have a boiling point of only 30 °C, so when the heat pipe is heated above 30 °C
the water vaporizes. This vapour rapidly rises to the top of the heat pipe, transferring heat. As the
heat is lost at the condenser (top), the vapour condenses to form a liquid (water) and returns to
the bottom of the heat pipe to once again repeat the process.
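The pressure-boiling-point relationship the heat pipe exploits can be illustrated numerically. The sketch below uses the Antoine equation for water with standard published constants (valid roughly 1-100 °C); the 30 °C boiling point is from the text, while the vacuum level required to achieve it is what the code estimates:

```python
import math

# Antoine equation constants for water: T in deg C, P in mmHg
# (standard published values, valid roughly 1-100 deg C)
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(p_mmhg):
    """Boiling temperature of water at a given absolute pressure."""
    return B / (A - math.log10(p_mmhg)) - C

def vapor_pressure_mmhg(t_c):
    """Saturation (vapor) pressure of water at a given temperature."""
    return 10 ** (A - B / (C + t_c))

# At sea level (760 mmHg) water boils at ~100 deg C
print(round(boiling_point_c(760), 1))

# To make water boil at only 30 deg C, the heat pipe must be
# evacuated to roughly this absolute pressure (~32 mmHg, ~4 kPa)
p30 = vapor_pressure_mmhg(30.0)
print(round(p30, 1), "mmHg =", round(p30 * 133.322 / 1000, 2), "kPa")
```

So a heat pipe with a 30 °C boiling point is evacuated to only a few percent of atmospheric pressure, which is why even mild solar heating is enough to start the evaporation-condensation cycle.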
At room temperature the water forms a small ball, much like mercury does when poured
out on a flat surface at room temperature. When the heat pipe is shaken, the ball of water can be
heard rattling inside. Although it is just water, it sounds like a piece of metal rattling inside.
This explanation makes heat pipes sound very simple. A hollow copper pipe with a little
bit of water inside, and the air sucked out! Correct, but in order to achieve this result more than
20 manufacturing procedures are required, along with strict quality control.
QUALITY CONTROL
Material quality and cleaning are extremely important to the creation of a good quality
heat pipe. Any impurities inside the heat pipe will affect its performance. The
purity of the copper itself must also be very high, containing only trace amounts of oxygen and
other elements. If the copper contains too much oxygen or other elements, they will leach out
into the vacuum, forming a pocket of air at the top of the heat pipe. This has the effect of moving
the heat pipe's hottest point downward, away from the condenser end, which is obviously
detrimental to performance; hence the need to use only very high purity copper.
Often heat pipes use a wick or capillary system to aid the flow of the liquid, but for the
heat pipes used in Sunrain solar collectors no such system is required as the interior surface of
the copper is extremely smooth, allowing efficient flow of the liquid back to the bottom [9]. Also,
Sunrain heat pipes are not installed horizontally. Heat pipes can be designed to transfer heat
horizontally, but the cost is much higher.
Fig. 2 Heat pipe condenser
The heat pipe used in Sunrain solar collectors comprises two copper components, the
shaft and the condenser. Prior to evacuation, the condenser is brazed to the shaft [9]. Note that
the condenser has a much larger diameter than the shaft; this is to provide a large surface area
over which heat transfer to the header can occur. The copper used is oxygen free copper, thus
ensuring excellent life span and performance.
Each heat pipe is tested for heat transfer performance and exposed to temperatures of 250 °C
prior to being approved for use. For this reason the copper heat pipes are relatively soft. Heat
pipes that are very stiff have not been exposed to such stringent quality testing, and may form an
air pocket in the top over time, thus greatly reducing heat transfer performance.
Compared to direct flow collectors, the use of heat pipes in collectors offers the
advantage of a simpler hydraulic interconnection of the solar circuit with a lower pressure drop
while reducing the system load during stagnation. The use of heat pipes can therefore lead to
simpler and more reliable solar heating systems. The optimization potential of the technology,
and thus its potential benefits, has however not yet been fully exploited and is in part unknown.
Within the research project "Heat pipes in solar collectors - principles of thermodynamics,
evaluation and new approaches for integration", carried out at the Institute for Solar Energy
Research Hamelin (ISFH), heat pipe solutions in collectors have been analyzed fundamentally,
design methods and optimization potentials have been developed, and the integration of heat
pipes in flat plate collectors has been investigated [10].
CONCLUSION
Basically, the approach of stagnation-temperature-limiting collectors transfers complexity
from the solar circuit into the collector. By using the dry-out limitation of
gravitational heat pipes with organic working fluids, temperatures in the solar circuit can be
significantly reduced and vapor formation during stagnation is avoided. This can lead to
potential system simplifications.
Exemplary considerations have shown that a system installation can be significantly
cheaper when stagnation-temperature-limiting collectors are used. For small systems (one-
and two-family dwellings) we expect a cost reduction potential for the solar system of about
25%. The use of collectors with heat pipes can therefore lead to savings that go far beyond the
potential savings of cheaper collectors. The systems are simpler, which relieves the installer, and
more reliable, which lowers the risk. Heat pipes in panels thus represent a potential key technology for
simplification and cost reduction of solar thermal systems.
References
[1] Morrison, G. L. and Tran, N. H., “Long term performance of evacuated tubular solar water
heaters in Sydney, Australia”, Solar Energy, 32, 785-791, 1984.
[2] Ma, L., Lu, Z, Zhang, J. and Liang, R.,“Thermal performance of the glass evacuated tube
solar collector with U-tube”, Building and Environment, 1959-1967, 2010.
[3] Chun, W., Kang, Y. H., Kwak, H. Y. and Lee, Y. S., “An experimental study of the utilization
of heat pipes for solar water heaters”, Applied Thermal Engineering, 19, 807-817, 1999.
[4] AS/NZS 4445.1:1997. Solar Heating - Domestic water heating systems - Part 1:
Performance rating procedure using indoor test methods, 1997.
[5] AS/NZS 2535.1:2007. Test Methods for solar collectors - Part 1: Thermal performance of
glazed liquid heating collectors including pressure drop, 2007.
[6] Huang, B. J., “Performance rating method of thermosyphon solar water heaters”, Solar
Energy, 50, 435-440, 1993.
[7] Chang, J. M., Leu, J. S., Shen, M. C. and Huang, B. J., “A proposed modified efficiency for
thermosyphon solar heating systems”, Solar Energy, 76, 693-701, 2004.
[8] CNS Standard B7277, No. 12558, 1989. Method of test for solar water heating
systems. Central Bureau of Standards, Ministry of Economic Affairs, Taiwan [in Chinese], 1989.
[9] http://en.sunrain.com/basic/What-is-a-Heat-Pipe.shtml
[10] http://www.isfh.de/institut_solarforschung/waermerohre-in-sonnenkollektoren.php?_1=1
21. HYDROGEN FUELLED IC ENGINE - AN OVERVIEW
By - V.S.Jadhav
The incentives for a hydrogen economy are the reduction of emissions, the potentially CO2-free use,
the sustainability and the energy security. In this article the focus is on the use of hydrogen in
internal combustion engines (ICE), or more precisely, hydrogen fuelled spark ignition (SI)
engines. When talking about hydrogen as a fuel for traffic applications, most people make the
link to fuel cells. Why? Why not a more realistic link to internal combustion engines? At the
moment the number of motor vehicles is estimated at about 800 million; replacing them with
fuel cells in a relatively short time is impossible. There are several reasons for converting the
gasoline, diesel or natural gas engines to hydrogen fuelled internal combustion engines. ICEs are
proven technology, are simple and well known, and the adaptations can be made at low cost.
During the transition period bi-fuel solutions are possible (to run the engine either on gasoline or
pure hydrogen). For larger engines (buses, trucks) mixtures of natural gas with hydrogen (about
20%) are easy to exploit. During this transition period, experience can be gained with the
production, storage and infrastructure of hydrogen.
Currently, hydrogen production is cheapest through steam reforming of
methane, but the associated CO2 emissions cannot be avoided. Renewable energy, e.g. solar power,
hydroelectric, tidal etc., can provide "CO2-free" electricity to electrolyze water into hydrogen. The
downside is that such electricity is mostly expensive. Also interesting is the use of electrolysis for
peak shaving of wind turbine power. Other possibilities are solar thermal, biomass, bacterial production, etc.
Several solutions are possible for hydrogen storage. Liquid storage gives a high mass density
but demands a large energy input. Compressed storage is used most widely: vessels with a
pressure of 350 bar are homologated, and vessels up to 700 bar have been demonstrated.
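To put the storage figures in perspective, a rough real-gas estimate of how much hydrogen such vessels hold is sketched below. The tank volume and the compressibility factors Z are illustrative assumptions (hydrogen deviates strongly from ideal-gas behaviour at these pressures), not data from this article:

```python
R = 8.314        # J/(mol K), universal gas constant
M_H2 = 2.016e-3  # kg/mol, molar mass of H2
T = 288.0        # K, ~15 deg C

# Assumed compressibility factors for H2 (Z > 1, so an ideal-gas
# estimate would overpredict the stored mass at these pressures)
cases = {350e5: 1.23, 700e5: 1.50}   # pressure [Pa] -> Z

V = 0.125  # m^3, a hypothetical 125 L vehicle tank

for p, Z in cases.items():
    rho = p * M_H2 / (Z * R * T)     # real-gas density, kg/m^3
    mass = rho * V
    print(f"{p/1e5:.0f} bar: {rho:.1f} kg/m3, {mass:.2f} kg in {V*1e3:.0f} L")
```

Note that doubling the pressure from 350 to 700 bar yields well under double the stored mass, because Z itself grows with pressure.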
I. HYDROGEN IC ENGINES – FOUR GENERATIONS
There are four generations in the development of hydrogen fuelled engines. In the first
generation a gas venturi is used. With a gas carburetor a large volume of combustible mixture is
in the inlet manifold. To avoid backfire (an explosion in the inlet manifold before the inlet valve
closes), the engine has to run lean (λ ≥ 2) which results in a low power output. For the second
generation the same technologies are used as for gasoline SI engines: multipoint sequential
(port) injection and electronic engine control. A possible strategy is then to use a late injection
so that the admitted air will cool the inlet manifold and the combustion chamber before the
injection of hydrogen. These injectors are now commercially available (after a delayed
introduction due to the large volume of low-density gas that must be injected in a short time). Even
with a late injection a stoichiometric mixture (λ = 1) is not always possible and the power output
is lower than that of a corresponding gasoline engine; see e.g. Ford's results reported by Tang et al.
(2002). For the third generation, at high loads, the mixture is kept stoichiometric (λ = 1). To
avoid backfire, exhaust gas recirculation (EGR) is used. At this stoichiometric mixture a three
way catalyst (TWC) can be used to decrease the NOx emissions. And with turbo/supercharging
and inter cooling the same or a higher power output is obtained as for a gasoline engine. Finally
for the fourth generation, research is going on into direct injection of hydrogen in SI engines.
II. DESIGN FEATURES OF DEDICATED HYDROGEN ENGINE
Here, an overview is given of the design features in which a dedicated hydrogen engine
differs from traditionally fuelled engines.
A. Abnormal combustion
The suppression of abnormal combustion in hydrogen engines has proven to be quite a
challenge and measures taken to avoid abnormal combustion have important implications for
engine design, mixture formation and load control. For spark- ignition engines, three regimes of
abnormal combustion exist: knock (auto-ignition of the end gas region), pre-ignition
(uncontrolled ignition induced by a hot spot, premature to the spark ignition) and backfire (also
referred to as backflash, flashback and induction ignition, this is a premature ignition during the
intake stroke, which could be seen as an early form of pre-ignition). Backfire has been a
particularly tenacious obstacle to the development of hydrogen engines. The causes cited for
backfire are: hot spots in the combustion chamber; deposits and particulates; residual energy in
the ignition circuit; induction in the ignition cable; and combustion in the piston top land persisting
up to the inlet valve opening time and igniting the fresh charge. All of these causes can result
in backfire and the design of a hydrogen engine should try to avoid them, as engine conditions
different from normal operation are always a possibility.
B. Air- Fuel Mixture formation
A range of mixture formation methods has been tested for hydrogen engines, mostly in
the pursuit of backfire-free operation: external mixture formation with a gas carburetor;
external mixture formation with 'parallel induction', that is, some means of delaying the
introduction of hydrogen, e.g. a fuel line closed by a separate valve on top of the intake valve
that only opens once the intake valve has lifted enough; external mixture formation with a gas
carburetor and water injection, sometimes with additional exhaust gas recirculation (EGR);
external mixture formation with timed manifold or port fuel injection (PFI), sometimes also
with some means of 'parallel induction'; and internal mixture formation through direct
injection (DI). During the last decade, only timed port injection and direct injection (during
the compression stroke or later) have been used,
as the other methods are less flexible and less controllable. External mixture formation by means
of port fuel injection has been demonstrated to result in higher engine efficiencies, extended lean
operation, lower cyclic variation and lower NOx production compared to direct injection. An
important advantage of DI over PFI is the impossibility of backfire. This too increases the
maximum power output of DI compared to PFI as richer mixtures can be used without fear of
backfire. Pre-ignition can still occur though, unless very late injection is used.
C. Load control strategies
Hydrogen is a very versatile fuel when it comes to load control. The high flame speeds
of hydrogen mixtures and its wide flammability limits permit very lean operation and substantial
dilution. The engine efficiency and the emission of NOx are the two main parameters used to
decide the load control strategy. Constant equivalence ratio throttled operation has been used but
mainly for demonstration as it is fairly easy to run a lean burn throttled hydrogen engine
(accepting the severe power output penalty). Where possible, wide open throttle (WOT)
operation is used to take advantage of the associated increase in engine efficiency, so regulating
load with mixture richness (qualitative control) instead of volumetric efficiency (quantitative
control) and thus avoiding pumping losses. Across the load range of the engine, different
strategies can be used, each trying to take as much advantage as possible of the properties of
the hydrogen-air mixture. It is important to know that NOx production is very dependent on the
mixture richness, the air-to-fuel equivalence ratio λ, as this is the major parameter controlling
the maximum combustion temperature. At lean mixtures NOx production is very low until a
certain λ is reached, the so-called ‘NOx formation limit’.
A mixture richer than this limit, which is normally around λ = 2, will produce high levels
of NOx and a maximum will be reached at about λ = 1.3. So, for loads below this ‘NOx
formation limit', a quality-based mixture control will be used. For idling and very low loads the
mixture has to be very lean.
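The load-control logic described above can be sketched as a simple decision function. The breakpoints used here (λ = 5 at idle, a switch-over at 60% load) are illustrative assumptions; only the λ ≈ 2 NOx formation limit and the stoichiometric-plus-EGR-plus-TWC high-load strategy come from the text:

```python
def load_control(load):
    """Map a normalized load (0..1) to an air-to-fuel equivalence
    ratio (lambda) and a control strategy for a hydrogen SI engine.
    Idle lambda and the switch-over load are hypothetical values."""
    LAM_IDLE, LAM_NOX_LIMIT, SWITCH_LOAD = 5.0, 2.0, 0.6
    if load < SWITCH_LOAD:
        # WOT, qualitative control: richen the mixture as load rises,
        # staying lean of the NOx formation limit (~lambda = 2)
        lam = LAM_IDLE - (LAM_IDLE - LAM_NOX_LIMIT) * (load / SWITCH_LOAD)
        return lam, "WOT, lean burn (qualitative control)"
    # High load: stoichiometric mixture with EGR, NOx handled by a TWC
    return 1.0, "stoichiometric + EGR + three-way catalyst"

print(load_control(0.0))   # idle: very lean
print(load_control(0.9))   # high load: lambda = 1
```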
III. ADVANTAGES OF HYDROGEN FOR SPARK IGNITION ENGINES
Fig.1 gives the flammability limits for different fuels at normal temperature and pressure.
As can be seen the flammability limits (= possible mixture compositions for ignition and flame
propagation) are very wide for hydrogen (between 4 and 75% hydrogen in the mixture)
compared to gasoline (between 1 and 7.6%). This means that the load of the engine can be
controlled by the air to fuel ratio, as for diesel engines. Nearly all the time the engine can be run
with a wide open throttle, resulting in a higher efficiency. The second advantage of hydrogen for
SI engines is the high burning velocity. For near-stoichiometric mixtures (near λ = 1, i.e. φ = 1) the
combustion is almost a constant-volume combustion, which increases the (thermodynamic)
efficiency.
Fig. 1: Flammability limits of hydrogen (H2), natural gas (CH4) and gasoline in air
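The flammability percentages in Fig. 1 can be converted into the air-to-fuel equivalence ratio λ used throughout this article. A minimal sketch, assuming dry air with 20.95% O2 by volume and the stoichiometry H2 + ½O2 → H2O (the numbers below are derived from these assumptions, not taken from the figure):

```python
O2_IN_AIR = 0.2095  # mole fraction of O2 in dry air

def lambda_from_h2_fraction(x):
    """Air-to-fuel equivalence ratio lambda for an H2-air mixture
    containing mole fraction x of hydrogen."""
    fuel_per_air_stoich = 2 * O2_IN_AIR   # mol H2 per mol air at lambda = 1
    fuel_per_air = x / (1.0 - x)
    return fuel_per_air_stoich / fuel_per_air

print(round(lambda_from_h2_fraction(0.295), 2))  # ~1.0: ~29.5% H2 is stoichiometric
print(round(lambda_from_h2_fraction(0.04), 1))   # ~10: the 4% lean flammability limit
```

So the 4% lean limit corresponds to roughly λ = 10, far leaner than the λ ≈ 2 needed to avoid backfire, which is what makes unthrottled, mixture-richness load control feasible.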
Also, the properties of lean hydrogen flames cause flame acceleration due to cellularity,
so no turbulence-enhancing measures (swirl ports, etc.) are needed. Again this
increases the efficiency of the engine. Furthermore, hydrogen has a high octane number and the
compression ratio of the engine can be increased. This, of course, increases the efficiency.
Finally the emissions of a hydrogen engine are very clean, only the noxious component NOx is
emitted.
References
[1] Bardon M.F. and Haycock R.G.: The hydrogen research of R.O. King, Proceedings, 14th
World Hydrogen Energy Conference, invited paper, Montreal, Canada, (2002).
[2] Berckmüller M. et al.: Potentials of a charged SI-hydrogen engine. SAE, paper nr 2003-
01-3210, (2003).
[3] Das L.M.: Near-term introduction of hydrogen engines for automotive and agricultural
application. International Journal of Hydrogen Energy, 27, 479–487, (2002).
[4] Gerbig F. et al.: Potentials of the hydrogen combustion engine with innovative
hydrogen-specific combustion process, Proceedings, Fisita World Automotive Congress,
paper nr F2004V113, Barcelona, Spain, (2004).
[5] Sierens R. and Verhelst S.: Influence of the injection parameters on the efficiency and
power output of a hydrogen fueled engine, Journal of Engineering for Gas Turbines and
Power, 125, 444-449, (2003).
[6] Sierens R., Verhelst S., Verstraeten S.: EGR and lean combustion strategies for a single
cylinder hydrogen fuelled IC engine, Proceedings, EAEC European Automotive
Congress, Belgrade, (2005).
22. Recent Advances in Heat Transfer Enhancements
By - A.R.Mane
Abstract:
Different heat transfer enhancers are reviewed. They are (a) fins and microfins, (b)
porous media, (c) large particles suspensions, (d) nanofluids, (e) phase-change devices, (f)
flexible seals, (g) flexible complex seals, (h) vortex generators, (i) protrusions, and (j) ultra high
thermal conductivity composite materials. Most of the heat transfer augmentation methods
presented in the literature that assist fins and microfins in enhancing heat transfer are
reviewed. Among these are using joint-fins, fin roots, fin networks, biconvections, permeable
fins, porous fins, capsulated liquid metal fins, and helical microfins. It is found that little
agreement exists between the works of different authors regarding single-phase heat transfer
augmented with microfins; in contrast, many works are in sufficient agreement in the case of
two-phase heat transfer augmented with microfins. With respect to
nanofluids, there are still many conflicts among the published works about both the heat transfer
enhancement levels and the corresponding augmentation mechanisms. The reasons behind
these conflicts are reviewed. In addition, this paper describes flow and heat transfer in porous
media as a well-modeled passive enhancement method. It is found that there are very few works
that deal with heat transfer enhancements using systems supported with flexible/flexible-
complex seals. Moreover, many recent works related to passive augmentation of heat transfer
using vortex generators, protrusions, and ultra high thermal conductivity composite material
are reviewed. Finally, theoretical enhancement factors along with many heat transfer
correlations are presented in this paper for each enhancer.
Introduction:
The way to improve heat transfer performance is referred to as heat transfer enhancement
(or augmentation or intensification). Nowadays, a significant number of thermal engineering
researchers are seeking new methods of enhancing heat transfer between surfaces and the
surrounding fluid. Due to this fact, Bergles [1, 2] classified the mechanisms of enhancing heat
transfer as active or passive methods. Those which require external power to maintain the
enhancement mechanism are named active methods. Examples of active enhancement methods
are well stirring the fluid or vibrating the surface [3]. Hagge and Junkhan [4] described various
active mechanical enhancing methods that can be used to enhance heat transfer. On the other
hand, the passive enhancement methods are those which do not require external power to sustain
the enhancements’ characteristics. Examples of passive enhancing methods are: (a) treated
surfaces, (b) rough surfaces, (c) extended surfaces, (d) displaced enhancement devices, (e) swirl
flow devices, (f) coiled tubes, (g) surface tension devices, (h) additives for fluids, and many
others.
Mechanisms of Augmentation of Heat Transfer
To the best knowledge of the authors, the mechanisms of heat transfer enhancement can be at
least one of the following.
(1)Use of a secondary heat transfer surface.
(2)Disruption of the unenhanced fluid velocity.
(3)Disruption of the laminar sublayer in the turbulent boundary layer.
(4)Introducing secondary flows.
(5)Promoting boundary-layer separation.
(6)Promoting flow attachment/reattachment.
(7)Enhancing effective thermal conductivity of the fluid under static conditions.
(8)Enhancing effective thermal conductivity of the fluid under dynamic conditions.
(9)Delaying the boundary layer development.
(10)Thermal dispersion.
(11)Increasing the order of the fluid molecules.
(12)Redistribution of the flow.
(13)Modification of radiative property of the convective medium.
(14)Increasing the difference between the surface and fluid temperatures.
(15)Increasing fluid flow rate passively.
(16)Increasing the thermal conductivity of the solid phase using special nanotechnology
fabrications.
Methods using mechanisms no. (1) and no. (2) include increasing the surface area in
contact with the fluid to be heated or cooled by using fins, intentionally promoting turbulence in
the wall zone employing surface roughness and tall/short fins, and inducing secondary flows by
creating swirl flow through the use of helical/spiral fin geometry and twisted tapes. This tends to
increase the effective flow length of the fluid through the tube, which increases heat transfer but
also the pressure drop. For internal helical fins however, the effect of swirl tends to decrease or
vanish altogether at higher helix angles, since the fluid flow then simply passes axially over the
fins [5]. On the other hand, for twisted tape inserts, the main contribution to the heat transfer
augmentation is due to the effect of the induced swirl. Due to the form drag and increased
turbulence caused by the disruption, the pressure drop with flow inside an enhanced tube always
exceeds that obtained with a plain tube for the same length, flow rate, and diameter.
Turbulent flow in a tube exhibits a low-velocity flow region immediately adjacent to the
wall, known as the laminar sublayer, with velocity approaching zero at the wall. Most of the
thermal resistance occurs in this low-velocity region. Any roughness or enhancement technique
that disturbs the laminar sublayer will enhance the heat transfer [6]. For example, in a smooth
tube of 25.4 mm inside diameter, at Re = 30,000, the laminar sublayer thickness is only
0.0762 mm under fully developed flow conditions. The internal roughness of the tube surface is
well known to increase the turbulent heat transfer coefficient. Therefore, for the example at
hand, an enhancement technique employing a roughness or fin element of height ~0.07 mm will
disrupt the laminar sublayer and will thus enhance the heat transfer. Accordingly, mechanism
no. (3) is a particularly important heat transfer mechanism for augmenting heat transfer.
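The sublayer figure quoted above can be reproduced with a standard smooth-tube estimate. The sketch below combines the Blasius friction factor with the conventional sublayer edge at y+ ≈ 5; both are textbook approximations assumed here, not values taken from reference [6]:

```python
import math

def sublayer_thickness(D, Re, y_plus=5.0):
    """Estimate the laminar (viscous) sublayer thickness in a smooth
    tube, assuming the Blasius friction factor f = 0.316 Re^-0.25
    and a sublayer edge at y+ ~ 5."""
    f = 0.316 * Re ** -0.25
    u_tau_over_U = math.sqrt(f / 8.0)   # friction velocity / bulk velocity
    # From y+ = y * u_tau / nu and Re = U * D / nu:
    return y_plus * D / (Re * u_tau_over_U)

d = sublayer_thickness(D=25.4e-3, Re=30_000)
print(f"{d*1e3:.4f} mm")   # ~0.077 mm, close to the 0.0762 mm quoted above
```

An enhancement element only slightly shorter than this thickness already protrudes through most of the sublayer, which is why roughness heights of the order of 0.07 mm are effective in this example.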
Li et al. [5] described the flow structure in helically finned tubes using flow visualization
by means of high-speed photography employing the hydrogen bubble technique. They used four
tubes with rounded ribs having helix angles between 38° and 80° and one or three fin starts, in
their investigation. Photographs taken by them showed that in laminar flow, bubbles follow
parabolic patterns whereas in the turbulent flow, these patterns break down because of random
separation vortices. Also, for tubes with helical ridges, transition to turbulent flow was observed
at lower Reynolds numbers compared to smooth tube values. Although swirl flow was observed
for all tubes in the turbulent flow regime, the effect of the swirl was observed to decrease at
higher helix angles. Li et al. [5] concluded that spiral flow and boundary-layer separation flow
both occurred in helical-ridging tubes, but with different intensities in tubes having different
configurations. As such, mechanisms no. (4) and no. (5) are also important heat transfer
mechanisms for augmenting heat transfer.
Arman and Rabas [7] discussed the turbulent flow structure as the flow passes over a
two-dimensional transverse rib. They identified the various flow separation and
reattachment/redevelopment regions as: (a) a small recirculation region in front of the rib, (b) a
recirculation region after the rib, (c) a boundary layer reattachment/redevelopment region on the
downstream surface, and finally (d) flow up and over the subsequent rib. The authors, noting that
recirculation eddies are formed above these flow regions, identified two peaks in the
local heat transfer: one at the top of the rib and the other in the downstream recirculation zone
just before the reattachment point. They also stated that heat transfer enhancement increases
substantially with increasing Prandtl number. Therefore, mechanism no. (6) plays an
important role in heat transfer enhancements.
References
1. E. Bergles, Handbook of Heat Transfer, McGraw-Hill, New York, NY, USA, 3rd edition,
1998.
2. E. Bergles, "The implications and challenges of enhanced heat transfer for the chemical
process industries," Chemical Engineering Research and Design, vol. 79, no. 4, pp.
437–444, 2001.
3. E. I. Nesis, A. F. Shatalov, and N. P. Karmatskii, “Dependence of the heat transfer
coefficient on the vibration amplitude and frequency of a vertical thin heater,” Journal
of Engineering Physics and Thermophysics, vol. 67, no. 1-2, pp. 696–698, 1994.
4. J. K. Hagge and G. H. Junkhan, “Experimental study of a method of mechanical
augmentation of convective heat transfer in air," Tech. Rep. HTL3, ISU-ERI-Ames-
74158, Iowa State University, Ames, Iowa, USA, 1975.
5. H. M. Li, K. S. Ye, Y. K. Tan, and S. J. Deng, “Investigation on tube-side flow
visualization, friction factors and heat transfer characteristics of helical-ridging tubes,”
in Proceedings of the 7th International Heat Transfer Conference, vol. 3, pp. 75–80,
Munich, Germany, 1982.
6. J. A. Kohler and K. E. Staner, “High performance heat transfer surfaces,” in Handbook
of Applied Thermal Design, E. C. Guyer, Ed., pp. 7.37–7.49, McGraw-Hill, New York,
NY, USA, 1984.
7. B. Arman and T. J. Rabas, "Disruption shape effects on the performance of enhanced
tubes with the separation and reattachment mechanism," in Proceedings of the 28th
National Heat Transfer Conference and Exhibition, vol. 202, pp. 67–75, August 1992.
23. Enhancement of vortex cooling capacity by reducing hot tube surface temperature
By - S.V. Yadav
Introduction:
A vortex tube is a thermal device that can produce hot and cold streams simultaneously
using only a compressed gas. Air is commonly used as a working medium in the vortex tube;
hence, the vortex cooling is an environmentally friendly system. Other benefits of vortex cooling
are fast cooling, rapid production of a low-temperature stream, and no moving parts, hence little
maintenance. The device is also called the Ranque-Hilsch tube or Ranque-Hilsch
vortex tube, named after the researchers who invented it and published significant research on
it. Several studies have presented findings on the unique energy separation phenomenon in
the vortex tube. Compressed gas at a pressure above atmospheric enters the tube inlet and
passes through radial nozzles into the swirl or vortex generation chamber, where it creates a
high-speed flow with thousands of rotations per second. The two airstreams produced then
flow along the tube: a high-temperature stream is found in the outer region while a low-
temperature stream is observed in the central region, as shown in Fig. 1. The cold and hot
streams exit the main tube at opposite ends in the counter-current flow type but leave at the
same end in the co-current flow type.
Fig. 1 Counter-current flow type of vortex tube
The fluid flow characteristics in a vortex tube can be studied using computational
methods, which indicate the areas of high and low air temperature. Well-known applications
of vortex cooling systems are spot cooling of thermal machinery, cutting tools, electronic
enclosures, and vest cooling. In mining, where the working environment may be
uncomfortable and a conventional cooling system inconvenient to construct, vortex cooling has
proved to be useful. It is clear that improving vortex cooling efficiency is of benefit to
energy conservation. A lower cost and less complex system is offered in this paper to gain an
efficiency enhancement of the vortex cooling unit.
This study proposes an alternative method to enhance the vortex cooling capacity by
reducing the hot tube surface temperature: the conventional counter-current-flow vortex tube
is modified by installing a thermoelectric compartment at the hot tube section. The study
parameters are the cold fraction, from 0 to 1, and an inlet air pressure of 1.5 bar. The cooling
capacity and efficiency of the vortex cooling system with and without the thermoelectric unit
are compared, showing the efficiency improvement obtained when the thermoelectric unit is
employed.
Experimentation:
A test rig is designed and constructed in the laboratory where a compressed air outlet
is located. Ambient air is compressed by a compressor; the pressurized air flows into an air
storage tank and then to a pressure regulator, which controls the inlet air pressure for the test
condition. A counter-current-flow vortex tube is employed in the test as the baseline
system. The vortex tube is later modified by combining it with a thermoelectric module at the hot
tube section which aims to increase the system efficiency by reducing the temperature of hot
tube surface. The schematic diagram of the experimental setup is shown in Fig. 2.
Fig.2 Schematic Diagram of the Vortex Cooling Test Rig
Result and Discussion:
The normal vortex tube is tested at an inlet air pressure of 1.5 bar and a cold fraction
(cold air mass flow rate per total air mass flow rate) from 0 to 1. The temperature distribution
along the tube (at a constant cold fraction) indicates that thermal separation occurs along the hot
tube section. An increasing vortex cooling capacity is observed, as shown in Fig. 3. Further
improvement can be achieved through the design and fabrication of the thermoelectric module
on the hot tube surface, where heat flow over the thermoelectric module is consistent.
Fig. 3 The cooling capacity of the vortex cooling system with (QC,TE) and without (QC) the
thermoelectric module
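The quantities compared in Fig. 3 follow the usual definitions: the cold fraction is the cold air mass flow rate divided by the total mass flow rate, and the cooling capacity follows from an energy balance on the cold stream. A minimal sketch, assuming air with cp ≈ 1005 J/(kg·K); the function names are illustrative, not from this article:

```python
def cold_fraction(m_cold, m_total):
    """Cold fraction: cold air mass flow rate per total air mass flow rate (0..1)."""
    return m_cold / m_total

def cooling_capacity(m_cold, T_inlet, T_cold, cp=1005.0):
    """Vortex cooling capacity Qc = m_c * cp * (T_in - T_c), in watts,
    for cold mass flow m_cold [kg/s] and temperatures in K (or both in C)."""
    return m_cold * cp * (T_inlet - T_cold)
```

For example, a cold stream of 3 g/s leaving 20 K below the inlet temperature corresponds to a cooling capacity of roughly 60 W.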
Conclusion:
The results show that the vortex cooling capacity and the vortex tube efficiency increase
when the thermoelectric module is used to extract heat from the hot tube section of the vortex
tube. With the temperature difference across the thermoelectric module, electrical power is
generated as a byproduct. The cooling capacity and efficiency of the vortex tube increase by 4.3
% and 9.6 % respectively.
References:
1. Rattanongphisat W.; The Development of Ranque-Hilsch Vortex Tubes: Computational
Models. Industrial Technology Journal, Lampang Rajabhat University, 3(2), 2010, pp. 40-51.
2. Rattanongphisat W., Riffat S.B., Gan G.; Thermal separation flow characteristic in a vortex
tube: CFD model. International Journal of Low Carbon Technologies, Issues 3/4, 2008.
3. Whalen S.A., Dykhuizen R.C.; Thermoelectric energy harvesting from diurnal heat flow in
the upper soil layer. Energy Conversion and Management, Vol. 64, 2012, pp. 397-402.
4. Arbuzov V., Dubnishchev Y., Lebedev A., Pravdina M., Yavorskii N.; Observation of
large-scale hydrodynamic structures in a vortex tube and the Ranque effect. Tech. Phys.
Lett., Vol. 23, No. 12, 1997, pp. 938-940.
INTERDISCIPLINARY
24. PHOTOELASTICITY
By – A. V. Patil
1 Introduction
Photoelasticity is a non-destructive, graphic stress-analysis technique based on an opto-
mechanical property called birefringence, possessed by many transparent polymers.
Photoelasticity is an experimental technique for stress and strain analysis that is particularly
useful for members having complicated geometry, complicated loading conditions or both. For
such cases, analytical methods (that is, strictly mathematical methods) may be too bulky or
impossible, and analysis by an experimental approach may be more appropriate. While the
virtues of experimental solution of static, elastic, two-dimensional problems are now largely
dominated by analytical methods, problems involving three-dimensional geometry, multiple-
component assemblies, dynamic loading and inelastic material behavior are usually more
responsive to experimental analysis.
2 Two Dimensional Photoelasticity
The procedure for preparing two-dimensional models from pre-machined templates will
be described. Alternatively, specimens may be machined “from scratch,” in which case a
computer controlled milling machine is recommended.
3 Procedure of Two Dimensional Photoelasticity
3.1. Selecting the Material
Many polymers exhibit sufficient birefringence to be used as photoelastic specimen
material. However, such common polymers as polymethylmethacrylate (PMMA) and
polycarbonate may be either too brittle or too intolerant of localized straining. Homalite-100 has
long been a popular general purpose material, available in various thicknesses in large sheets of
optical quality. PSM-1 is a more recently introduced material that has excellent qualities, both for
machining and for fringe sensitivity. Another good material is epoxy, which may be cast
between plates of glass, but this procedure is rarely followed for two-dimensional work.
3.2. Making a Template
If more than 2 or 3 pieces of the same shape are to be made, it is advisable to machine a template
out of metal first. This template may then be used to fabricate multiple photoelastic specimens
having the same shape as that of the template.
3.3. Machining the Specimen
INFLUENCE December 2015
Department of Mechanical Engineering Page | 94
If the specimen is machined “from scratch,” care must be taken to take very light cuts with a
sharp milling cutter in order to avoid heating the specimen unduly along its finished edges. A
coolant, such as ethyl alcohol, kerosene, or water, should be used to minimize heating. If a
template is used, then a band saw with a sharp, narrow band saw blade is used to prepare the
shape of the specimen. A generous allowance of about 1/8 inch should be marked on the
specimen all around the template edge, since the blade will heat the material and damage the cut edge.
Then a router with a high-speed carbide router bit, preferably with fine multiple flutes, should be
used to fabricate the edge of the model.
Figure 1 Machining the specimen
A succession of two centering pins, the first having a diameter larger than that of the
router bit and the second the same size, should be used so that excess material can first be
removed quickly, and then in a very controlled manner, leaving the specimen with the same
dimensions as those of the template. The piece should always be forced into the cutting edge of
final router passes should be smooth and very light so as to avoid heating of the specimen edges.
3.4. Drilling the Specimen
If the specimen has holes, such as those used for load-application points using pins, then
these holes should be drilled carefully with a sharp bit and plenty of coolant, such as ethyl
alcohol, kerosene, or water; otherwise unwanted fringes will develop around the edge of the hole.
Figure 2 Drilling the specimen
The specimen should be backed with a piece of similar material in order to avoid
chipping on the back side of the specimen as the drill breaks through. A series of 2 or 3 passes of
the drill bit through the specimen, with coolant added each time, will minimize heat-induced
fringes.
3.5. Viewing the Loaded Specimen
After the specimen is removed from the template and cleaned, it is ready for
loading. A Polariscope (to be described later) is needed for viewing the fringes induced by the
stresses. The elements of the polariscope must be arranged so as to allow light to propagate
normal to the plane of the specimen. If a loading frame is needed to place a load on the
specimen, then this frame must be placed between the first element and the last element of the
polariscope. Monochromatic light should be used for the sharpest fringes; however, the light
source does not need to be coherent, and the light may or may not be collimated as it passes
through the specimen.
3.6. Recording the Fringe Patterns
An ordinary camera or a video camera may be used to record the fringe patterns.
3.7. Calibrating the Material
The sensitivity of a photoelastic material is characterized by its material fringe constant fσ,
which relates the fringe order N of a given fringe to the thickness h of the
specimen in the light-propagation direction and the difference between the principal stresses in
the plane normal to the light-propagation direction:

σ1 − σ2 = N fσ / h
By means of an experiment using a model of simple geometry subjected to known
loading, the value of fσ is determined. The disk in diametric compression is a common
calibration specimen.
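As a sketch of how this calibration works: for a disk of diameter D under a diametral load P, elasticity theory gives the principal stress difference at the disk centre as 8P/(πDh), so reading the fringe order N there yields the material fringe constant (here called fσ) directly. The function names below are illustrative, not from the article:

```python
import math

def fringe_constant_from_disk(P, D, h, N):
    """Material fringe constant f_sigma from a disk in diametric compression.

    At the disk centre, sigma1 - sigma2 = 8*P / (pi * D * h); combining this
    with the stress-optic law N = h*(sigma1 - sigma2)/f_sigma gives
        f_sigma = 8*P / (pi * D * N)
    P: diametral load, D: disk diameter, h: thickness, N: fringe order at centre.
    """
    return 8.0 * P / (math.pi * D * N)

def stress_difference(N, f_sigma, h):
    """Stress-optic law: sigma1 - sigma2 = N * f_sigma / h."""
    return N * f_sigma / h
```

Once fσ is known, `stress_difference` converts any observed fringe order on a specimen of the same material and thickness into a principal stress difference.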
3.8. Interpreting the Fringe Patterns
Two types of pattern can be obtained: isochromatic and isoclinic. These patterns are
related to the principal-stress differences and to the principal stress directions, respectively.
INFLUENCE December 2015
Department of Mechanical Engineering Page | 97
25. An Integrated Approach for Reliability Analysis of Resilient System
By – R. B. Patil
1. Introduction:
In 1816, the word reliability was first used by the poet Samuel Taylor Coleridge in praise of
his friend Robert Southey [1]. From 1816 to date, several revolutionizing social, cultural and
technological developments have occurred. These developments have increased the need for and
the importance of reliability. Especially during World War II, most organizations came to know
the cost of failure and system downtime, so research work started to improve system
performance. Reliability engineering became a scientific discipline in the mid-1950s [2]. The
objective of this discipline was to increase system (hardware) reliability. To date, a lot of work
has been carried out in this field, but a large amount of work remains to be done. Zio [2]
discussed some important problems in his paper, “Reliability Engineering: Old Problems and
New Challenges”.
Reliability is defined as the probability that a component or system performs its
intended task for a given period of time when used under stated operating conditions. In practice,
we speak of reliability, but we measure failures [3]. The life cycle cost (LCC) of a system is also
closely linked to its reliability and maintainability (R & M). To estimate the sustaining costs,
one needs reliability and maintainability engineering details to find when and how things fail
[4].
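The definition above can be made concrete with the simplest life model. A minimal sketch, assuming a constant failure rate (exponential life distribution), which is only one of many possible models; the function name is illustrative:

```python
import math

def reliability(t, mtbf):
    """R(t) = exp(-t / MTBF): probability that a component with a constant
    failure rate (exponential life model) survives a mission of duration t.
    t and mtbf must be in the same time units."""
    return math.exp(-t / mtbf)
```

At t = MTBF this gives R ≈ 0.37, a reminder that we speak of reliability but measure failures: even a component operated for exactly its mean life fails more often than not.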
Modern systems have four basic elements: hardware, software, organizational and
human, which affect system performance, reliability, maintainability and availability [2]. As
already discussed, reliability engineering was originally developed to handle rationally the
failures of hardware components. It was also assumed that a system is either
functioning or in a faulty state. But there are many systems, such as manufacturing, production
and power generation, whose overall performance can settle at different levels (e.g. 100 % or 80 % of
the normal capacity) depending on the operating conditions of their constitutive multi-state
elements. Modeling the stochastic behavior of a multi-state system (MSS) with a large number of
components is also difficult [2, 5]. Further developments are certainly needed.
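To illustrate what multi-state modeling involves, here is a hedged sketch of the simplest case: independent multi-state components in series, where the system capacity is the minimum of the component capacities. Real MSS models (Markov processes, universal generating functions) are far richer; the data and function name below are purely illustrative:

```python
from itertools import product

def mss_capacity_distribution(elements):
    """Capacity distribution of a series multi-state system.

    elements: list of dicts {capacity_level: probability} for independent
    components; the series system capacity is the minimum of the component
    capacities. Returns {system_capacity: probability}.
    """
    dist = {}
    # Enumerate every combination of component states (feasible only for
    # small systems -- exactly the state-explosion problem noted above).
    for combo in product(*(e.items() for e in elements)):
        cap = min(level for level, _ in combo)
        p = 1.0
        for _, prob in combo:
            p *= prob
        dist[cap] = dist.get(cap, 0.0) + p
    return dist
```

For two components, one perfect-or-failed and one that can derate to 50 % capacity, the system settles at 100 %, 50 % or 0 % with probabilities that follow directly from the component state probabilities.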
Software reliability is defined as the probability of failure-free software operation for a
specified period of time in a specified environment. Recently, most mechanical systems have
been integrated with electronic devices and software, and software reliability is now used in many
applications. It is required to enlarge the scope of software reliability assessment (with realistic
approaches such as fault detection and correction) and to integrate it with hardware reliability.
In the last few decades it has been found that organizational and human factors play a significant
role in the risk of system failures and accidents throughout the life cycle of a system. This is
due to the fact that the reliability of the hardware components utilized in technological systems
has significantly improved in recent years. So it is required to consider organizational and
human factors in reliability and risk analysis, especially in the case of more complicated systems [6].

In this context, systems should not only be reliable, i.e. with acceptably low failure
probability, but also resilient, i.e. able to recover from disruptions of the normal operating
conditions. In this regard, a new field of research in resilience engineering is emerging [7]. Hence
it is also required to search for the causal links between the system elements (hardware, software,
organizational and human) and to model and integrate their behavior so as to quantify that of
the system as a whole (Figure 1).

Figure 1 - Integrated system reliability (hardware, software, human and organizational
elements feeding into system reliability)

2. Need for Further Research:
Hardware, software, organizational and human factors are ingredients of most complex
systems and they decide system reliability. Most researchers have selected one or two elements
and developed a model for reliability analysis. Hence it is required to carry out an integrated
study of all four elements; such work has not been done yet. Reliability modeling of multi-state
systems is a challenging one. Software reliability should be integrated with hardware reliability.
There is no benchmarking in HRA due to the lack of available data. The health of an industrial
system is decided by the organization, so basic approaches and methodologies should be
developed for organizational reliability. Research is limited due to lack of data and
unavailability of benchmarking. For organizational system analysis, different levels of
management in organizational systems, such as top, middle and manager-level activities, will be
considered.
3. Bibliography:
[1] J. H. Saleh, K. Marais, ‘Highlights from the early (and pre-) history of reliability
engineering’, Reliability Engineering and System Safety 91 (2006) 249-256.
[2] E. Zio, ‘Reliability Engineering: Old problems and new challenges’, Reliability
Engineering and System Safety 94 (2009) 125-141.
[3] H. P. Barringer, ‘An Overview of Reliability Engineering Principles’, PennWell
Conferences and exhibitions, Houston, TX, January 29- February 2, 1996.
[4] H. P. Barringer, ‘A Life Cycle Cost Summary’, International Conference of Maintenance
Societies (ICOMS-2003), Australia, May 20-23, 2003.
[5] J. F. Castet, J. H. Saleh, ‘Beyond reliability, multi-state failure analysis of satellite
subsystems: A statistical approach’, Reliability Engineering and System Safety, 95 (2010) 311-322.
[6] S. French, ‘Human reliability analysis: A critique and review for managers’, Safety
Science, 49(2011) 753-763.
[7] R. Steen, T. Aven, ‘A risk perspective suitable for resilience engineering’, Safety Science,
49(2011) 292-297.
26. How Japan Replaced Half Its Nuclear Capacity With Efficiency
By – P. M. Wadekar

“Nuclear power is inherently enormously complicated, and that by itself is the strongest
argument for getting our energy from somewhere else. As we saw in Japan, the consequences of
mistakes with nuclear power are very great.” — Robert Giegengack

After the Tohoku earthquake in March 2011, Japan was in a seemingly impossible
situation. A tremendous amount of conventional generation capacity, including the entire
nuclear fleet, was unavailable, and the country faced the risk of power cuts during
summer consumption peaks.

But miraculously, or seemingly so, in just a few short weeks Japan managed to
avert the rolling power cuts that many believed inevitable. Even more impressive, the
Japanese have turned these emergency measures into lasting solutions.

So how'd they do it without forcing people back to the Stone Age? Japan overcame
this daunting task by tapping the cheapest and most widely available source of energy:
energy efficiency and conservation.

Much of the electricity savings were initially driven by a popular movement known
as "Setsuden" ("saving electricity"). This movement emerged to encourage people and
companies to conserve energy and prevent rolling power cuts. Simple measures such as
increasing temperatures in homes and offices, "thinning" lighting by removing some of
the bulbs and tubes, shutting down big screens and cutting exterior lighting enabled Japan
to dramatically reduce power demand almost overnight (albeit at the cost of a small
amount of personal comfort).
In addition to these measures, the dress code in offices was eased to reduce the
need for AC, while commercial facilities were audited to identify potential savings.
More surprising is how far off pundits were about the impact. Some made dire predictions
about the need to replace the nuclear fleet with "cheap coal". Instead, a combination of
commonsense energy-saving measures that began as temporary behavioural changes has
led to permanent efficiency gains. In the process, the Japanese people and their business
community proved the punditry wrong.
In contrast, coal power projects proposed in the wake of Fukushima are still sitting
on the drawing board. Japan is now planning to develop renewable energy
sources in modern ways.
Wind lenses
At the end of 2009, the worldwide capacity of wind power generators stood at
159.2 gigawatts, generating 340 TWh per annum (equivalent to about 2 percent of worldwide
electricity usage), according to the World Wind Energy Association’s annual report. Much of the
potential increase in renewable energy around the world can come from wind, but significant
investments will need to be made, including in offshore wind farms.
To cope with various social, meteorological and topographical situations, wind
technology has developed much over the years. Notable steps are the growth in the size of
rotors, allowing a higher volume of electricity to be generated; the installation of variable-speed
turbines with rotors capable of handling increases and decreases in wind speed, thus mitigating
power fluctuation and noise pollution; and construction of offshore floating turbines to harness
consistent and strong winds, some of which are now, at pilot stage, capable of producing 5.0
megawatts of electricity.
Fig: Wind Lenses
At the Yokohama Exhibition, one of the most noteworthy advances in wind technology,
the wind lens, has already seen the light of day. The name derives from the lens of a magnifying
glass because, in the same way that a magnifying glass can intensify light from the sun, wind
lenses concentrate the flow of wind. The structure of the wind lens is relatively simple: a large
hoop, called a brimmed diffuser, intensifies the wind flowing through it to rotate the turbine
located in the centre.
Verification experiments show that wind lens turbines produce three times as much
electricity as those without a hoop. According to Professor Yuji Ohya from Kyushu University,
even a gentle breeze can accelerate the revolution of the turbines considerably. The 2.5 metre-
wide blades can, at a wind speed of 5 metres a second, provide a sufficient amount of
electricity to power an average household.
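The threefold output gain is consistent with the cubic dependence of wind power on speed: since P = ½ρAv³Cp, a lens that raises the effective wind speed by about 44 % triples the power (1.44³ ≈ 3). A rough sketch; the power coefficient and air density below are assumed typical values, and the function name is illustrative:

```python
import math

def wind_power(rotor_diameter, wind_speed, cp=0.4, rho=1.225):
    """Turbine output P = 0.5 * rho * A * v^3 * Cp, in watts.

    rotor_diameter [m], wind_speed [m/s]; cp is an assumed power
    coefficient and rho the air density [kg/m^3] at sea level."""
    area = math.pi * (rotor_diameter / 2.0) ** 2  # swept rotor area
    return 0.5 * rho * area * wind_speed ** 3 * cp
```

Doubling the wind speed multiplies the output by eight, which is why even a modest concentration of the flow pays off disproportionately.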
Wind lenses, given their efficiency, allow wind turbines to be made smaller and hence
reduce construction costs. They can also help improve safety and reduce noise pollution, and
therefore make the technology more accessible in urban environments.
Solar cell technology
Engineers from Osaka Prefecture University’s Department of Applied Chemistry are
designing and developing new sensitizers that allow solar cells to absorb a wider range of
wavelengths of light. According to the inventors, this technology can increase the efficiency of
current solar cells by at least 15 percent.
The Yokohama fair exhibited many examples of photo-voltaic technology, including a
‘smart house’ in which household essentials such as kitchen appliances, water heaters, air
conditioners and even cars are powered by solar electricity.
The potential CO2 reductions from smart homes that incorporate energy management
systems are about 56 percent.
Algae fuel distilled by nature
If the Yokohama exhibition is any indication, the generation of biofuels is one of the
main areas of research for many Japanese institutes.
The Central Research Institute of Electric Power Industry (CRIEPI) presented interesting
research on generating “green crude oil” from blue-green microalgae (bacteria that obtain their
energy through photosynthesis). CRIEPI has simplified the complex process used in existing
algae technologies by applying a particular dewatering substance to extract organic compounds
(the oily components) from high water-content microalgae.
The benefit of this process is that it avoids dehydration of the biomass, crop
extraction and the use of toxic organic solvents; a further advantage of the distillation process is
that it has no adverse effects on the environment or the ozone layer.
This new manner of obtaining green crude from microalgae could be part of a mix of
sustainable second generation biofuels that help the world overcome the global warming and
energy crises.
27. Study of dielectric fluid flow in micro electro discharge milling process using CFD method
By - S.A.Mullya
Abstract:
The material removal phenomenon of sparking and melting in the µEDM process occurs at
an inter electrode gap (IEG) of less than 50 microns. The behavior of the fluid properties at the
IEG is useful for explaining material removal and the effect of tool rotation in the µEDM process.
Based on previous reports, it was observed that tool rotation is an inherent part of spark micro
machining which directly influences debris flushing and redeposition. For stable machining
performance, removal of debris from the gap is important. The dielectric flow plays an important
role in flushing away debris from the gap and cooling the electrode. This work investigates the
fluid flow in the inter electrode gap.
Keywords: µED-milling, µchannel, dielectric, fluid flow, CFD.
1. INTRODUCTION
Today’s world is moving towards miniaturization: mechanical and electronic
products that are small in size and light in weight are in great demand. This necessitates fine
precision machining with advanced technology. During World War II, the physicists B.R. and N.I.
Lazarenko in Moscow started the development of Electrical Discharge Machining (EDM), using
controlled discharge conditions, to achieve precision machining. Since then, EDM
technology has developed rapidly and become indispensable in manufacturing applications such
as die and mould making, micro-machining, prototyping, etc. However, the phenomena
occurring at the Inter Electrode Gap (IEG) in EDM are very complex, attracting researchers all over
the world. Electrical discharge phenomena in EDM occur over a very short time period, in a
very narrow space of a few microns filled with liquid dielectric, bubbles and debris, thus making both
observation and theoretical analysis extremely difficult [1].
Micro Electro Discharge Milling is a new machining technology similar to
µEDM, except that a rotating cylindrical tool electrode achieves the desired shape by
following a programmed path. The main advantage of µED-milling is that it avoids the
manufacture of the complex tools required for achieving 3D profiles. A high machining aspect ratio,
the capability to machine any hard conductive material and low machining cost are further advantages
of µED-milling. Currently µED-milling is mostly used for the production of micro cavities with
high aspect ratio and of tools such as micro molds for micro injection molding [2].
INFLUENCE December 2015
Department of Mechanical Engineering Page | 105
In order to understand the physics of the process, it is very important to understand the
material removed (debris), the craters formed and the dielectric fluid flow with molten metal.
Accumulation of debris in the discharge gap usually causes poor discharge, which not only
lowers the material removal rate but also severely damages the machined surface [3]. Material
removal occurs intermittently during or just after the discharge duration: material is removed
while the generated bubble is expanding, whereas no debris particle is removed while the
bubble is contracting [4]. Due to re-solidification of the molten material, a recast layer or white
layer is formed on the machined surface. The thickness of the recast layer depends upon
different parameters such as peak current, pulse on time and dielectric flushing. The presence of
cracks within the recast layer is due to the use of hydrocarbon-based dielectrics, which are rich
in carbon content [5]. It is agreed that the physics of the EDM process is complex, and most
researchers have used CFD methods to simulate and study it.
The dielectric, the working fluid in µEDM, plays an important role in the material removal rate and the properties of the machined surface. It serves several functions: insulation, ionization, cooling of the electrodes and removal of debris particles. The µEDM process can be classified according to the type of dielectric fluid used: die-sink EDM generally operates with hydrocarbon oil, while wire EDM, micro EDM and fast hole drilling usually use deionised water. Pure kerosene, which is most commonly used in µEDM, creates several problems, such as degradation of dielectric properties, air pollution and adhesion of carbon particles to the work surface. Deionized water can be used efficiently as an alternative to kerosene (hydrocarbon oil).
2. PROBLEM FORMULATION
A top view of the 2D model used for the CFD analysis of dielectric fluid flow is shown in Fig. 1. A tool electrode of diameter 500 µm rotates in the clockwise direction and is fed from right to left. A constant inter electrode gap of 50 µm is maintained between the tool and the work piece. The width and length of the µchannel are 600 µm and 1300 µm respectively. In conventional µEDM, the electrodes are submerged under dielectric and additional dielectric for circulation enters through a nozzle. For the simulation, the inlet and outlet of the fluid flow are considered on
either side of the wall as shown in Fig. 1. For ease of simulation, a wall (partition) of 200 µm is located at the center to distinguish between the inlet and the outlet. The presence of this wall does not affect the flow pattern near the tool electrode.
Fig. 1. 2D geometry with mesh.
The rotating tool electrode, which is the solid domain, was meshed with quadrilateral elements, and the fluid domain, i.e. the µchannel gap, was meshed with triangular elements. As the dimensions are in microns, the entire domain was finely meshed to improve the accuracy of the results. The fluid flow
simulation of the 2D geometry was carried out using the Finite Volume method. As the geometry considered is small, the Reynolds number of the flow is below 2000, so the flow in the inter electrode gap is laminar. The realizable k-ε model with standard wall functions was used for the simulation. The SIMPLE (Semi-Implicit Method for Pressure Linked Equations) algorithm was used for pressure-velocity coupling. The effect of gravity was neglected because, as in the actual µED-milling process, the tool and work piece are submerged under the dielectric fluid. The dielectric fluid used for the simulation was liquid kerosene. The CFD simulations were performed with the commercial ANSYS Fluent software, in which the governing Navier–Stokes equations of the flow are solved numerically on a computational mesh.
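The laminar-flow claim can be checked with a quick order-of-magnitude Reynolds number estimate. The kerosene properties below are typical handbook values assumed for illustration; the geometry and rotation speed are taken from the article:

```python
import math

# Typical kerosene properties (handbook values, assumed for illustration)
rho = 800.0      # density, kg/m^3
mu = 1.6e-3      # dynamic viscosity, Pa.s

# Geometry and speed from the article
d_tool = 500e-6  # tool electrode diameter, m
gap = 50e-6      # inter electrode gap, m
rpm = 500        # tool rotation speed, rev/min

# The tool surface speed drives the flow in the gap
u = math.pi * d_tool * rpm / 60.0

# Reynolds number based on the gap width
re = rho * u * gap / mu
print(f"surface speed = {u:.4f} m/s, Re = {re:.3f}")
# → surface speed = 0.0131 m/s, Re = 0.327
```

Even at the highest rotation speed considered here (500 rpm), the Reynolds number based on the 50 µm gap is far below the laminar-turbulent transition value of about 2000, which supports treating the gap flow as laminar.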
3. CONCLUSION
Removal of debris particles from the inter electrode gap plays an important role in µED-milling. Flushing of the debris particles from the gap is done by the dielectric fluid, and the size and shape of the debris and the surface texture formed depend upon the fluid velocity in the gap. In this study, the dielectric fluid flow pattern in the electrode gap was investigated using a CFD tool, and the simulation results were compared with SEM images of the machined surface. The effects of changes in the electrode gap, the inlet velocity of the fluid, the dimensions of the µchannel and the tool electrode diameter on the fluid behavior in the gap were investigated. The simulation results showed that as the gap size decreases, the average velocity of the fluid in the gap increases. In µED-milling, as the electrode rotates at high speed, the fluid in contact with the electrode rotates at high speed and the speed reduces across the gap towards the stationary work piece. The average velocity of the fluid in the gap is not affected by a change in the inlet velocity, but the flow pattern changes and eddies form. The eddies act as a stirrer in the process: they supply fresh dielectric to the gap and remove debris particles from it (Fig. 2). Non-uniform re-deposition is observed on the milled surface, due to the rotation of the electrode, the variation of fluid velocity in the gap and the formation of eddies.
Fig. 2. Velocity path lines at different inlet conditions: (a) 100 rpm, 0.1 cm/s; (b) 100 rpm, 0.01 cm/s; (c) 500 rpm, 0.001 cm/s.
References
[1] M. Kunieda et al., “Advancing EDM through Fundamental Insight into the Process,” CIRP Annals – Manufacturing Technology 54 (2005) 64–87.
[2] G. Karthikeyan et al., “Micro ED milling process performance: An experimental investigation,” Int. J. of Machine Tools & Manufacture 50 (2010) 718–727.
[3] Jin Wang et al., “Debris and bubble movements during EDM,” Int. J. of Machine Tools & Manufacture 58 (2012) 11–18.
[4] Jin Wang et al., “Simulation model of debris and bubble movement in consecutive-pulse discharge of EDM,” Int. J. of Machine Tools & Manufacture 77 (2014) 56–65.
[5] P. C. Tan et al., “Modeling of Recast Layer in micro EDM,” J. of Manufacturing Science and Engineering 132 (2010) 031001.
28. Low cost automation - An insight
By - S.V.Patil
Competitiveness, primarily through high productivity and quality at low cost, is a major manufacturing need, and it can be met by low cost automation. The financial crisis faced all over the world has posed tremendous challenges to manufacturing organizations. Even at low volumes and with large variety, they have to be competitive with minimum investment. Low cost automation can play an important role in this situation.
Huge corporations with tremendous financial strength, technical leadership and multinational markets can achieve quality even under severe competition. But organizations in developing countries, with constraints on all of the above, i.e. finance, technical leadership and limited markets, and with very low labor productivity, have to achieve productivity and quality through strategies workable under such conditions. There are many methods available to increase competitiveness.
One of the most practical, safe, economical and rewarding methods is the application of low cost automation (LCA), pursued in Japan with a lot of enthusiasm from around the 1960s. In India also, LCA has been found useful for organizations ranging from small enterprises employing a few people to huge manufacturing organizations employing thousands. Big organizations like L&T, Siemens, Mahindra & Mahindra, Bajaj Auto, etc., set up separate cells for in-house development of LCA. However, these devices were useful only to replace the muscular effort of labor. Developments in microelectronics that began during the 1970s have added considerable power to LCA. Electronic sensing, data acquisition, data processing and sophisticated controllers have given an excellent combination of high technology at low cost, adding considerable intelligence to the automation system. Prof. N. Ramakrishnan of the Mechanical Engineering Department, IIT Mumbai, coined the term ‘Low cost Hi-tech automation’ (LCHA) in 1994 to highlight the significance of this combination.
What is Low Cost Automation?
According to the training manual prepared by H.S. Dwarkanath, formerly Chief Consultant, National Productivity Council of India, Low Cost Automation (popularly known as LCA) is the introduction of simple pneumatic, hydraulic, mechanical and electrical devices into existing production machinery, with a view to improving its productivity. It involves the use of standardized parts and devices to mechanize or automate machines, processes and systems.
Low-cost high-tech automation
Low cost automation coupled with sensors, microcontrollers and data storage/processing facilities, with considerable enhancement of capability, is LCHA.
Listing advantages
The following advantages have made LCHA very popular.
o Financial constraints do not hinder LCHA. Capital equipment is very expensive and has a long payback period; LCHA, on the other hand, is built around existing resources, so the investment required is lower and the payback period is short. Proceeding progressively, the same money can be used again and again because the payback period is short.
o Labor productivity can be increased, which reduces the percentage of labor cost in the total cost.
o Smaller batch sizes become viable with LCHA, whereas expensive automation needs a sufficiently large batch size to be cost effective.
o Rising raw material costs necessitate better utilization of material, less W.I.P. and less rejection; LCHA can help in all of these.
o In-house development: since the people involved in the activity are encouraged to participate in the development, they develop the skill to maintain, and even repair, the devices. This reduces breakdown cost and time.
o Less risk: one proceeds step by step, so the risk due to heavy investment, selection of the wrong technology, etc., is minimized.
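The payback argument above can be made concrete with a small sketch. The investment and savings figures below are purely hypothetical, chosen only to illustrate the comparison:

```python
# Hypothetical comparison of payback periods: conventional capital
# equipment vs. a low-cost automation (LCA) retrofit of existing machinery.
def payback_years(investment, annual_savings):
    """Simple (undiscounted) payback period in years."""
    return investment / annual_savings

# Assumed figures for illustration only (currency units are arbitrary)
capital_machine = payback_years(investment=5_000_000, annual_savings=1_000_000)
lca_retrofit = payback_years(investment=300_000, annual_savings=150_000)

print(f"capital equipment: {capital_machine:.1f} years")  # 5.0 years
print(f"LCA retrofit:      {lca_retrofit:.1f} years")     # 2.0 years
```

With the shorter payback of the retrofit, the same capital can be recycled into the next improvement well before a conventional machine would have paid for itself.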
However, LCHA also has some limitations.
o Development takes time: when you buy a readymade automatic machine, the time involved in developing, troubleshooting, modification, etc., is avoided. LCHA, being custom built and generally developed in-house, takes more time.
o Learning time: a technology like CNC needs very little time to change over from one drawing to another, since the skill is built into the program. LCHA involves manual participation and hence takes some time to learn whenever a change is incorporated.
It is needless to add that the advantages far outweigh the limitations. In Japan and the so-called Asian Tigers, LCHA has been widely used because it is the most attractive proposition for small entrepreneurs.
Fields of application
It is very interesting to note that any manufacturing activity is a potential candidate for LCHA. To clarify this point further:
o All activities related to discrete manufacturing: irrespective of the product, activities like loading, feeding, clamping, machining, welding, forming, gauging, assembly, packing, etc.
o Process industries: chemicals, oils, powders, pharmaceuticals, etc.
o Mining: very useful for mining operations.
o Manufacture of explosives and volatile products like LPG, etc.
The advantage of pneumatics, which is widely used in LCHA (namely that air is a very safe working medium where fire is a hazard), makes LCHA very attractive for the following:
o Printing and packing: LCHA is widely used here.
o Agriculture: tilling, sowing and plucking.
o Stock breeding: controlled mixing and distribution of feed, collecting milk and eggs, cleaning the cages, etc., can all be done with LCHA.
o Food processing: a newly developing area with considerable growth potential.
It is obvious from the above that any area of productive activity has potential for the application of LCHA.
LCHA and India
India has a tremendous advantage over the developed countries. To begin with, LCHA is custom built, so design and development costs cannot be recovered through large volume production; however, the cost of engineering personnel in India is significantly lower than in developed nations, so custom-built automation can be made much more cheaply here than in many other countries. Custom-built LCHA can also take the production volume into consideration, avoiding both under-automation and over-automation. Microelectronics and sensors add reliability, quality and appropriate intelligence to the machine. Hence the benefit-to-cost ratio will be very high.
It can be reasonably stated that a sophisticated automatic system can be developed at 25 to 35 per cent of the cost incurred in developed nations. There is no doubt that India will be a major power in global manufacturing. LCHA is a very effective, competitive and cost-effective tool for the manufacturing industries, irrespective of the product they make and the production volume.
In conclusion
When the manufacturing industry in India was excited about the economic boom and almost all manufacturing units were expanding to meet the increasing demand, it appeared that we were ready to become a global manufacturing hub. The worldwide recession, the slump in demand for products and the precarious condition of the automobile industry have since created a sense of confusion, gloom and uncertainty. Understanding and applying LCHA may be one of the strategies that helps us through, and this recession period can be a good time to get into it.
References
[1] NPC Training Manual
[2] A&D India December 2008/January 2009, p. 30
29. ASTROSAT - INDIA’S FIRST MULTIWAVELENGTH ASTRONOMY SATELLITE
By - K.J.Burle
ASTROSAT is India’s first dedicated multi-wavelength space observatory. This scientific satellite mission aims at a more detailed understanding of our universe. One of the unique features of the ASTROSAT mission is that it enables simultaneous multi-wavelength observations of various astronomical objects with a single satellite. ASTROSAT observes the universe in the optical, ultraviolet, and low- and high-energy X-ray regions of the electromagnetic spectrum, whereas most other scientific satellites are capable of observing only a narrow wavelength band. The multi-wavelength observations of ASTROSAT can be further extended with coordinated observations using other spacecraft and ground-based observatories. All major astronomy institutions and some universities in India will participate in these observations.
ASTROSAT has a lift-off mass of about 1513 kg. It was launched by PSLV-C30 into a 650 km orbit inclined at an angle of 6 deg to the equator. After injection into orbit, the two solar panels of ASTROSAT were automatically deployed in quick succession. The spacecraft control centre at the Mission Operations Complex (MOX) of the ISRO Telemetry, Tracking and Command Network (ISTRAC) at Bangalore manages the satellite during its mission life. The science data gathered by the five payloads of ASTROSAT are telemetered to the ground station at MOX. The data are then processed, archived and distributed by the Indian Space Science Data Centre (ISSDC) located at Byalalu, near Bangalore.
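As a rough consistency check on the quoted 650 km orbit, the orbital period of a circular orbit follows from Kepler's third law. The Earth constants below are standard textbook values, not taken from the article:

```python
import math

MU_EARTH = 398600.4418   # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6371.0         # km, mean Earth radius

altitude = 650.0                     # km, ASTROSAT orbit from the text
a = R_EARTH + altitude               # semi-major axis of a circular orbit, km

# Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
print(f"orbital period ≈ {period_s / 60:.1f} minutes")
# → orbital period ≈ 97.6 minutes
```

A period of roughly 98 minutes means the satellite circles the Earth close to 15 times a day, which is typical for a low Earth orbit of this altitude.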
The scientific objectives of ASTROSAT mission are:
o To understand high energy processes in binary star systems containing neutron stars and
black holes
o Estimate magnetic fields of neutron stars
o Study star birth regions, high energy processes in star systems lying beyond our galaxy
o Detect new briefly bright X-ray sources in the sky
o Perform a limited deep field survey of the Universe in the Ultraviolet region
These primary science objectives are being met with 5 science payloads.
1) Large Area X-ray Proportional Counters (LAXPC)
2) Cadmium-Zinc-Telluride Imager (CZTI)
3) Soft X-ray imaging Telescope (SXT)
4) Scanning Sky Monitor (SSM)
5) Ultra Violet Imaging Telescope (UVIT)
In its thirty-first flight (PSLV-C30), conducted on September 28, 2015, India's Polar Satellite Launch Vehicle successfully launched ASTROSAT, the country's multi-wavelength space observatory, along with six foreign customer satellites into a 644.6 km x 651.5 km orbit inclined at an angle of 6 deg to the equator. The achieved orbit is very close to the intended one. This was the thirtieth consecutive success for PSLV.
References:
[1] “ASTROSAT Handbook,” Ver 1.6, Dec 18, 2014, Indian Space Research Organisation.
[2] www.isro.gov.in/update/28-sep-2015/pslv-successfully-launches-dia%E2%80%99s-multi-wavelength-space-observatory-astrosat
[3] www.isro.gov.in/sites/default/files/PSLV-C30Brochure.compressed.pdf
[4] www.astrosat.iucaa.in/?q=node/11
30. Bullet Train
By - M.M.Salgar
The Shinkansen is a network of high-speed railway lines in Japan operated by four Japan Railways Group companies. Starting with the Tōkaidō Shinkansen (515.4 km) in 1964, the network has expanded to currently consist of 2,615.7 km of lines with maximum speeds of 240–320 km/h (150–200 mph), 283.5 km of Mini-Shinkansen lines with a maximum speed of 130 km/h, and 10.3 km of spur lines with Shinkansen services. The network presently links most major cities on the islands of Honshu and Kyushu, with services along the newly constructed extension to the northern island of Hokkaido scheduled to commence in March 2016. The nickname bullet train is sometimes used in English for these high-speed trains.
The maximum operating speed is 320 km/h. Test runs have reached 443 km/h for
conventional rail in 1996, and up to a world record 603 km/h (375 mph) for maglev trains in
April 2015.
Shinkansen literally means new trunk line, referring to the high-speed rail line network.
The name Superexpress, initially used for Hikari trains, was retired in 1972 but is still used in
English-language announcements and signage.
The original Shinkansen line, connecting the largest cities of Tokyo and Osaka, is the world's busiest high-speed rail line. Carrying 151 million passengers per year (March 2008), and with over 5 billion total passengers, it has transported more passengers than any other high-speed line in the world. The service on the line operates much larger trains at higher frequency than most other high-speed lines. At peak times, the line carries up to thirteen trains per hour in each direction, each with sixteen cars (1,323-seat capacity and occasionally additional standing passengers) and a minimum headway of three minutes between trains.
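The peak-hour figures quoted above imply a seated line capacity that is easy to verify with a little arithmetic:

```python
# Peak service figures quoted in the text
trains_per_hour = 13
seats_per_train = 1323            # one sixteen-car trainset

# Seated capacity per hour in one direction
seats_per_hour = trains_per_hour * seats_per_train

# Average headway implied by 13 trains/hour; it must not be
# shorter than the stated 3-minute minimum headway
avg_headway_min = 60 / trains_per_hour

print(seats_per_hour)             # 17199 seated passengers/hour/direction
print(round(avg_headway_min, 1))  # 4.6 minutes average headway
```

So at peak the line offers over 17,000 seats per hour per direction, with an average headway comfortably above the 3-minute operational minimum.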
Though largely a long-distance transport system, the Shinkansen also serves commuters who travel to work in metropolitan areas from outlying cities one or two stops removed from the main cities, and there are some services dedicated to this market.
Japan's Shinkansen network had the highest annual passenger ridership (a maximum of 353 million in 2007) of any high-speed rail network until 2011, when China's high-speed rail network surpassed it at 370 million passengers annually, though the Shinkansen's cumulative total, at over 10 billion passengers, is still larger. While the network has been expanding, this additional ridership is expected to be offset by Japan's declining population, causing ridership to decline over time. The recent expansion in tourism has boosted ridership marginally.
References
1. "About the Shinkansen Outline". JR Central. March 2010. Retrieved 2 May 2011.
2. "JR-EAST: Fact Sheet Service Areas and Business Contents" (PDF). East Japan Railway Company. Retrieved 30 April 2011.