caps ton on wind turbine

Upload: nishant395

Post on 03-Jun-2018


  • 8/12/2019 Caps Ton on wind turbine


CAPSTONE PROJECT (PART-I) REPORT
(Project Term: August-December, 2012)

    EMBEDDING SOUND EFFECTS IN MATLAB

    WITH THE USE OF NEURAL NETWORKS

    Submitted by

    Name of Student1: SWATI MISHRA

    Registration Number: 10906494

    Name of Student2: PARAS BISHT

    Registration Number: 10904454

    Name of Student3: PALLAVI

    Registration Number: 10903538

    Name of Student4: NEEMANI GUPTA

    Registration Number: 10903879

    Project Group Number: KG048

    Under the Guidance of

    Miss MADHU BALA, Lecturer

    Discipline of Computer Science and Information Technology

    Lovely Professional University, Phagwara

    August to December, 2012


    DECLARATION

We hereby declare that the project work entitled Embedding Sound Effects in Matlab is an authentic record of our own work, carried out as a requirement of Capstone Project (Part-I) for the award of the degree of B.Tech in Computer Science (CSE) from Lovely Professional University, Phagwara, under the guidance of Madhu Bala, during August to December, 2012.

    Name : Madhu

    U.ID : 13878

Designation : Lecturer

    Signature of Faculty Mentor

    Name of Student 1: SWATI MISHRA

    Registration Number: 10906494

    Name of Student 2: PARAS BISHT

    Registration Number: 10904454

    Name of Student 3: PALLAVI

    Registration Number: 10903538

    Name of Student 4: NEEMANI GUPTA

    Registration Number: 10903879


    CERTIFICATE

This is to certify that the declaration statement made by this group of students is correct to the best of my knowledge and belief. The Capstone Project Proposal based on the technology / tool learnt is fit for submission and partial fulfillment of the conditions for the award of B.Tech in Computer Science (CSE) from Lovely Professional University, Phagwara.

Name : Madhu

U.ID : 13878

Designation : Lecturer

    Signature of Faculty Mentor


    INDEX

Profile of the Problem

Introduction

Literature Review

Technique Used

Use of Neural Networks

Functions of Neural Networks

Objective of Study

Design: System Flow Chart

Complete Work Done

Experimental Work Done

Interface

Expected Outcome of Project

Conclusion

References


    PROFILE OF THE PROBLEM

    INTRODUCTION

Our project topic consists of adding sounds for different emotions using MATLAB. First, we studied the different sound frequencies of the human voice under different emotions, such as sadness, happiness, and anger. Communication is an important capability, not only in linguistic terms but also in emotional terms, and here the recognition is based solely on voice, which is a fundamental mode of human communication.

This report is organized as follows: in this section, we focus on the emotional state and the emotional group in order to explain the rationale of the classification from the viewpoint of psychology. We also propose a new feature for emotion recognition called the frequency range of meaningful signal (FRMS), and we compare the proposed feature with other existing features, such as energy and pitch.

    Emotional state

In 2000, Hiroyasu Miwa defined the emotional space as having three levels: the activation level, the pleasant level, and the certainty level. Using the same approach, we can map emotions in an emotional space having infinite dimensions and infinite indices. The psychology of emotion can be categorized in terms of physiological, behavioral, and cognitive psychology. Each theory explains how humans feel and recognize emotions. According to these theories, humans recognize their own emotions by changes in physiology or behavior.

Fig-1: Various emotions on the pleasant-activity scale


Although the same changes may occur, humans experience other emotions that stem from other cognitions or thoughts for a given situation. By the same approach, infinite dimensions of emotion can be categorized into three groups, and we call each component of these categories an emotional state. Behavioral and cognitive indices are not, however, recognized by using voice or facial expressions. Hence, the emotional space for emotion recognition has only physiological indices. Of the physiological indices, we propose the activity and the pleasantness as the emotional states for emotion recognition. In Hiroyasu Miwa's model of emotional space, we removed the certainty index because it is in the cognition category, thereby necessitating artificial intelligence for recognizing emotion. With this model of emotional space, an infinite variety of dimensions can be compressed into two dimensions. In addition, an infinite number of emotions can be classified into four emotion groups, namely joy, sadness, anger, and neutrality. For example, Hiroyasu Miwa sorts six primary emotions into the groups of joy (happiness), sadness (disgust, sadness), anger (anger, fear), and neutrality (neutrality). Hence, we do not recognize each primary emotion but rather each group of emotions that has the same emotional state.

Fig-2: Various emotions on the frequency scale

    FREQUENCY RANGE OF MEANINGFUL SIGNAL

    What is the frequency range of meaningful signal?

In general, human speech has a wide frequency range. However, the important or meaningful frequency range is from 100 Hz to 5000 Hz. We have the original speech signals for four emotions.


Fig-3: Original speech signal for neutrality    Fig-4: Original speech signal for joy

Fig-5: Original speech signal for sadness    Fig-6: Original speech signal for anger


    LITERATURE REVIEW

    MATLAB

MATLAB (matrix laboratory) is a numerical computing environment and fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran.

Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing capabilities. An additional package, Simulink, adds graphical multi-domain simulation and Model-Based Design for dynamic and embedded systems.

    In 2004, MATLAB had around one million users across industry and academia. MATLAB users

    come from various backgrounds of engineering, science, and economics. MATLAB is widely

    used in academic and research institutions as well as industrial enterprises.

    Syntax

The MATLAB application is built around the MATLAB language, and most use of MATLAB involves typing MATLAB code into the Command Window (as an interactive mathematical shell), or executing text files containing MATLAB code and functions.

FREQUENCY OF SOUND OF MALE AND FEMALE AT DIFFERENT EMOTIONS

Table-1: Recognition rates for men (%) (rows: actual emotion; columns: recognized emotion)

                NEUTRALITY    JOY    SADNESS    ANGER
  NEUTRALITY       80.2       4.8     10.5       4.6
  JOY               1.7      80.5      5.1      13.1
  SADNESS           8.4       8.0     80.6       3.1
  ANGER             2.7      20.8      0.2      76.4

  OVERALL: 79.3%


Table-2: Recognition rates for women (%) (rows: actual emotion; columns: recognized emotion)

                NEUTRALITY    JOY    SADNESS    ANGER
  NEUTRALITY       69.1       5.9     21.1       4.8
  JOY               8.1      61.0      4.8      25.9
  SADNESS           8.8       5.9     84.4       0.8
  ANGER             3.4      19.6      3.4      73.8

  OVERALL: 72%
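As a quick sanity check on the overall figures, the per-emotion recognition rates on the diagonals of Tables 1 and 2 can be averaged. A minimal sketch, here in Python as our own illustration (the values are copied from the tables above):

```python
# Diagonal (per-emotion) recognition rates from Table-1 and Table-2, in percent,
# in the order: neutrality, joy, sadness, anger.
men   = [80.2, 80.5, 80.6, 76.4]
women = [69.1, 61.0, 84.4, 73.8]

# The unweighted averages land close to the reported overall rates
# (79.3% for men, 72% for women); the small gap on the men's table suggests
# the reported figure weights the emotions by utterance count.
men_avg = sum(men) / len(men)        # about 79.4
women_avg = sum(women) / len(women)  # about 72.1
```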

    Recognizing Two Emotional States:

Table-3: Classification counts for hot anger vs. neutral (rows: input; columns: output)

               Hot anger    Neutral
  Hot anger       128          12
  Neutral           8          72

In this experiment, we attempted to distinguish two emotional types. We used all 62 features and a three-layer neural network with 20 nodes in the hidden layer to distinguish hot anger from neutral, which is considered the easiest classification task. The testing result is shown in Table-3: 140 utterances labelled as hot anger and 80 utterances labelled as neutral were tested, and 128 hot anger utterances and 72 neutral utterances were classified correctly.

From the results of each test (Fig-7), we can see that the classification performance is stable, and the average accuracy is 90.91%.
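The 90.91% figure can be reproduced directly from Table-3. A small illustrative check in Python (names are ours, not from the project code):

```python
# Confusion counts from Table-3: rows are the true (input) label,
# columns the predicted (output) label.
confusion = {
    "hot anger": {"hot anger": 128, "neutral": 12},
    "neutral":   {"hot anger": 8,   "neutral": 72},
}

correct = sum(confusion[label][label] for label in confusion)  # 128 + 72
total = sum(sum(row.values()) for row in confusion.values())   # 140 + 80
accuracy = correct / total
print(f"{accuracy:.2%}")  # 90.91%
```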

In comparison to neutral speech, anger is produced with a lower pitch, higher intensity, more energy (500 Hz) across the vocalization, a higher first formant (first sound produced), and faster attack times at voice onset (the start of speech). "Hot anger", in contrast, is produced with a higher, more varied pitch and even greater energy (2000 Hz).


    Fig-7: Graph of hot anger and neutral emotion

Aside from the recognition rate, the key advantages of the FRMS are that it is not dependent on the magnitude of the speech signal and that it is robust in noisy environments. In practice, when the magnitude of a speech signal changes due to the magnitude of a speaker's voice, the distance between the speaker and the microphone, or the characteristics of the microphone, the recognition performance deteriorates. Hence, compensation for this deterioration is a significant study theme. However, the FRMS is concerned only with the envelope of the signal and not the magnitude. Further, the FRMS focuses on the relationship between the original speech signal and the low-pass filtered speech signal. As such, the FRMS is independent of the distance or the magnitude of the voice, thereby making it powerful in practical use. To verify this independence, we performed recognition experiments using the same training data as the original magnitude but with half- and double-magnitude test data. As anticipated, recognition experiments on original, half, and double magnitude speech data yielded almost the same results, as shown in Fig. 6. Furthermore, the results confirm that the FRMS feature has the advantage of being usable in practice without any pre-processing for distance compensation.

Another advantage of the FRMS feature is its robustness in noisy environments. Because most noises have a frequency higher than the cut-off frequency, high-frequency noises tend to disappear after low-pass filtering. In addition, if noises are weaker than the main voice, then most of these noises will not affect the envelope of the main voice. Hence, the FRMS can be robust in a noisy environment. To verify this robustness, we performed recognition experiments on voice-noise and white-noise speech data using the same training data as previously used.

The voice-noise data consisted of the main voice and two other sets of emotional voice data from a different speaker at approximately half of the magnitude. The white-noise data consisted of the main voice and white noise at about a quarter of the magnitude of the main voice. These experiments also allowed us to compare the advantages of the FRMS feature with other features (energy and pitch), and to verify our half- and double-magnitude data as well as the voice-noise data.
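The intuition that low-pass filtering suppresses high-frequency noise while keeping the voice-band envelope can be sketched numerically. This is our own illustration in Python, not the project's FRMS code; the sample rate, tone frequencies, and filter coefficient are all assumptions chosen for the sketch:

```python
import math

def low_pass(signal, alpha):
    """First-order IIR low-pass filter: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    out, y = [], 0.0
    for x in signal:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

def rms(signal):
    """Root-mean-square amplitude of a signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

fs = 8000                                                      # assumed sample rate, Hz
t = [i / fs for i in range(2000)]
voice = [math.sin(2 * math.pi * 200 * ti) for ti in t]         # voice-band tone
noise = [0.5 * math.sin(2 * math.pi * 3500 * ti) for ti in t]  # high-frequency noise

filtered_voice = low_pass(voice, alpha=0.3)
filtered_noise = low_pass(noise, alpha=0.3)

# The voice component passes largely intact while the noise is strongly
# attenuated, so the filtered envelope is dominated by the voice.
print(rms(filtered_voice) / rms(voice), rms(filtered_noise) / rms(noise))
```

Because the filter is linear, the filtered noisy signal equals the filtered voice plus the filtered (and now heavily attenuated) noise, which is why the envelope survives.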

Before starting the project, we collected information regarding the frequencies of different voices. We also studied the different sound functions in Android and Windows.

    ANDROID

Android provides two main APIs for playing sounds: the SoundPool class and the MediaPlayer class.

SoundPool can be used for small audio clips. It can repeat sounds and play several sounds simultaneously. The sound files played with SoundPool should not exceed 1 MB. SoundPool loads files asynchronously; as of Android API level 8 it is possible to check whether loading is complete via an OnLoadCompleteListener.

Android supports different audio streams for different purposes. The phone volume buttons can be configured to control a specific audio stream; for example, during a call the volume buttons increase or decrease the caller volume. To make the buttons control the sound media stream, set the audio stream type in your application.

The android.media.MediaRecorder class can be used to record audio and video. To use MediaRecorder you need to set the source device and the format.

    WINDOWS

Following a trend set by Apple and Google, Microsoft has turned its formerly device-gated app store into something you can check out from any web browser (and link to from publications that write about apps).

The web welcomes a new app store today: Microsoft Windows Phone Marketplace. Scoff if you must, but until you've tried Windows Phone 7, with all due respect, you might not know what you're talking about. It's a beautifully designed, responsive platform, and it could make a real dent now that Nokia's on board.

None other than the head of Verizon thought Windows Phone 7 would beat BlackBerry to become the third ecosystem behind iOS and Android, according to an erroneous InformationWeek article, since updated, which was believable enough to have been picked up by several other publications. Hey, it could happen. Again: have you used it?

Windows Phone Marketplace offers over 30,000 free and paid apps, as noted by Wired.com, in 16 categories with all the usual stuff: icon, price, a rating, description, screenshots, and user reviews. We've already checked it out in the Windows Zune software and on a Windows Phone 7. Now that it's finally available on the web, we decided to do a little legwork to figure out where things stand music-wise on this, day one of the web-based Windows Phone Marketplace.


    USE OF NEURAL NETWORKS IN MATLAB

Neural networks are a way to model any input-to-output relation based on some input-output data when nothing is known about the model itself; MATLAB makes building such a model simple.

Neural Network Toolbox provides functions and apps for modeling complex nonlinear systems that are not easily modeled with a closed-form equation. Neural Network Toolbox supports supervised learning with feedforward, radial basis, and dynamic networks. It also supports unsupervised learning with self-organizing maps and competitive layers. With the toolbox you can design, train, visualize, and simulate neural networks. You can use Neural Network Toolbox for applications such as data fitting, pattern recognition, clustering, time-series prediction, and dynamic system modeling and control.

To speed up training and handle large data sets, you can distribute computations and data across multicore processors, GPUs, and computer clusters using Parallel Computing Toolbox.

    Characteristics of Neural Networks

They exhibit some brain-like behaviors that are difficult to program directly, such as: learning, association, categorization, generalization, feature extraction, optimization, and noise immunity.

There is a wide range of neural network architectures:

Multi-Layer Perceptron (Back-Prop Nets) 1974-85

Neocognitron 1978-84

Adaptive Resonance Theory (ART) 1976-86

Self-Organizing Map 1982

Hopfield 1982

Bi-directional Associative Memory 1985

Boltzmann/Cauchy Machine 1985

Counterpropagation 1986

Radial Basis Function 1988

Probabilistic Neural Network 1988

General Regression Neural Network 1991

Support Vector Machine 1995


NEURAL NETWORK FUNCTIONS

    Graphical interface functions

nctool - Neural network clustering tool. It opens the neural network clustering GUI.

nftool - Neural network fitting tool. It opens the neural network fitting tool GUI.

nntool - Open Data/Network Manager. It opens the network/data manager window, which allows us to import, create, use, and export neural networks and data.

cascadeforwardnet

Cascade-forward neural network

Syntax

cascadeforwardnet(hiddenSizes,trainFcn)

Description

Cascade-forward networks are similar to feed-forward networks, but include a connection from the input and every previous layer to following layers.

As with feed-forward networks, a cascade network with two or more layers can learn any finite input-output relationship arbitrarily well given enough hidden neurons.

cascadeforwardnet(hiddenSizes,trainFcn) takes these arguments,

hiddenSizes Row vector of one or more hidden layer sizes (default = 10)

trainFcn Training function (default = 'trainlm')

and returns a new cascade-forward neural network.

elmannet

    Elman neural network

    Syntax

    elmannet(layerdelays,hiddenSizes,trainFcn)


Description

Elman networks are feedforward networks (feedforwardnet) with the addition of layer recurrent connections with tap delays.

With the availability of full dynamic derivative calculations (fpderiv and bttderiv), the Elman network is no longer recommended except for historical and research purposes. For more accurate learning try time delay (timedelaynet), layer recurrent (layrecnet), NARX (narxnet), and NAR (narnet) neural networks.

Elman networks with one or more hidden layers can learn any dynamic input-output relationship arbitrarily well, given enough neurons in the hidden layers. However, Elman networks use simplified derivative calculations (using staticderiv, which ignores delayed connections) at the expense of less reliable learning.

elmannet(layerdelays,hiddenSizes,trainFcn) takes these arguments,

layerdelays Row vector of increasing 0 or positive delays (default = 1:2)

hiddenSizes Row vector of one or more hidden layer sizes (default = 10)

trainFcn Training function (default = 'trainlm')

and returns an Elman neural network.

    feedforwardnet

    Feedforward neural network

    Syntax

    feedforwardnet(hiddenSizes,trainFcn)

    Description

    Feedforward networks consist of a series of layers. The first layer has a connection from the

    network input. Each subsequent layer has a connection from the previous layer. The final layer

    produces the network's output.

Feedforward networks can be used for any kind of input-to-output mapping. A feedforward network with one hidden layer and enough neurons in the hidden layer can fit any finite input-output mapping problem.


Specialized versions of the feedforward network include fitting (fitnet) and pattern recognition (patternnet) networks. A variation on the feedforward network is the cascade-forward network (cascadeforwardnet), which has additional connections from the input to every layer, and from each layer to all following layers.

    feedforwardnet(hiddenSizes,trainFcn) takes these arguments,

    hiddenSizes Row vector of one or more hidden layer sizes (default = 10)

    trainFcn Training function (default = 'trainlm')

    and returns a feedforward neural network.
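To make the "series of layers" concrete, here is a minimal hand-rolled forward pass, written in Python as our own sketch of the idea (it is not the Neural Network Toolbox implementation): one hidden layer with a tanh activation followed by a linear output layer, with weights chosen by hand purely for illustration.

```python
import math

def forward(x, W1, b1, W2, b2):
    """Feedforward pass: input -> tanh hidden layer -> linear output layer."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

# Tiny example: 2 inputs, 2 hidden neurons, 1 output.
W1, b1 = [[1.0, 1.0], [1.0, -1.0]], [0.0, 0.0]
W2, b2 = [[0.5, 0.5]], [0.0]
y = forward([1.0, 0.0], W1, b1, W2, b2)  # both hidden units see net input 1.0
```

Training (e.g. with 'trainlm') then amounts to adjusting W1, b1, W2, b2 to fit data; the universal-approximation claim above says enough hidden neurons make this flexible enough for any finite mapping.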

    catelements

    Concatenate neural network data elements

    Syntax

    catelements(x1,x2,...,xn)

    [x1; x2; ... xn]

    Description

catelements(x1,x2,...,xn) takes any number of neural network data values, and merges them along the element dimension (i.e., the matrix row dimension).

If all arguments are matrices, this operation is the same as [x1; x2; ... xn].

If any argument is a cell array, then all non-cell-array arguments are enclosed in cell arrays, and then the matrices in the same positions in each argument are concatenated.
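The row-wise merge that catelements performs on matrix data can be pictured with plain nested lists; the following is a hypothetical Python analogue of the matrix case, not a MATLAB API:

```python
# [x1; x2] in MATLAB stacks rows; for row-major nested lists this is
# simply list concatenation along the outer (row) dimension.
x1 = [[1, 2], [3, 4]]
x2 = [[5, 6]]
merged = x1 + x2
print(merged)  # [[1, 2], [3, 4], [5, 6]]
```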

    view- View neural network

    Syntax

    view(net)

    Description

    view(net) launches a window that shows your neural network (specified in net) as a graphical

    diagram.


    OBJECTIVE OF STUDY

Neural networks are better suited than conventional computers for processing sensorial data, such as signal processing, image processing, pattern recognition, robot control, and non-linear modeling and prediction.

To survey attractive applications of artificial neural networks.

To develop a practical approach for using artificial neural networks in various technical, organizational, and economic applications.

The analogy with biological neural networks is too weak to convince engineers and computer scientists of correctness; correctness follows from the mathematical analysis of non-linear functions or dynamical systems and from computer simulations.

Neural networks are realistic alternatives for information problems (instead of tedious software development). They are not magic: their design is based on solid mathematical methods.

Neural networks are interesting whenever examples are abundant and the problem cannot be captured in simple rules. They are superior for cognitive tasks and the processing of sensorial data such as vision, image and speech recognition, control, robotics, and expert systems.

Technical neural networks are ridiculously small with respect to brains, but biology provides good suggestions. Fascinating developments are possible with neural networks, such as apparatus controlled by and adapted to the specificities of the user's voice, and pen-based computing.

Adding sound in Windows or Linux is not as straightforward as commonly assumed, so we thought of doing something new that can be beneficial to all while learning a new thing. We also aim to learn the sound frequency ranges of the different emotions of human beings, to be heard using MATLAB.

The purpose of this work is to study the emotion recognition method and its performance. Based on this study, we plan to develop an automatic emotion recognizer, which can help people who have difficulty understanding and identifying emotions to improve their social and interaction skills. Such an assistive emotion recognition tool might help people with autism to study and practice social interactions.


    DESIGN

    SYSTEM FLOW CHART

MATLAB
   |
NEURAL NETWORK
   |
FUNCTIONS OF GUI: nprtool, nftool, nctool, nntool, view
   |
EMOTIONS: SAD, ANGER, HAPPINESS, CRYING


EXPERIMENTAL WORK DONE

    Variables

Variables are defined using the assignment operator, =. MATLAB is a weakly typed programming language: types are implicitly converted. It is also a dynamically typed language: variables can be assigned without declaring their type, except if they are to be treated as symbolic objects, and their type can change. Values can come from constants, from computation involving values of other variables, or from the output of a function. For example:

    >> x = 17

    x =

    17

    >> x = 'hat'

    x =

    hat

>> y = x + 0
y =
104 97 116

    >> x = [3*4, pi/2]

    x =

    12.0000 1.5708

    >> y = 3*sin(x)

    y =

    -1.6097 3.0000
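What happens in the x = 'hat' example is that the arithmetic forces each character to its numeric code. The same codes (and the 3*sin(x) result) can be checked from Python, as our own illustration of the mechanism:

```python
import math

# 'hat' + 0 in MATLAB converts each character to its code point.
codes = [ord(c) for c in "hat"]
print(codes)  # [104, 97, 116]

# The elementwise 3*sin(x) example, with x = [3*4, pi/2]:
x = [3 * 4, math.pi / 2]
y = [3 * math.sin(v) for v in x]
print([round(v, 4) for v in y])  # [-1.6097, 3.0]
```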

    Vectors/matrices

As suggested by its name (a contraction of "Matrix Laboratory"), MATLAB can create and manipulate arrays of 1 (vectors), 2 (matrices), or more dimensions. In the MATLAB vernacular, a vector refers to a one-dimensional (1×N or N×1) matrix, commonly referred to as an array in other programming languages. A matrix generally refers to a 2-dimensional array, i.e. an m×n array where m and n are greater than 1. Arrays with more than two dimensions are referred to as multidimensional arrays. Arrays are a fundamental type, and many standard functions natively support array operations, allowing work on arrays without explicit loops.

A simple array is defined using the syntax init:increment:terminator. For instance:

>> array = 1:2:9
array =
1 3 5 7 9

defines a variable named array (or assigns a new value to an existing variable with the name array) which is an array consisting of the values 1, 3, 5, 7, and 9. That is, the array starts at 1 (the init value), increments with each step from the previous value by 2 (the increment value), and stops once it reaches (or would exceed) 9 (the terminator value).


>> array = 1:3:9
array =
1 4 7

The increment value can be left out of this syntax (along with one of the colons), to use a default value of 1.

>> ari = 1:5
ari =
1 2 3 4 5

assigns to the variable named ari an array with the values 1, 2, 3, 4, and 5, since the default value of 1 is used as the increment.
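For readers more familiar with other languages, the init:increment:terminator colon syntax behaves like Python's range, except that MATLAB's terminator is inclusive when it is reached. A small comparison, as our own illustration:

```python
# MATLAB 1:2:9 -> Python needs a stop value one past the last element.
assert list(range(1, 10, 2)) == [1, 3, 5, 7, 9]   # 1:2:9
assert list(range(1, 10, 3)) == [1, 4, 7]         # 1:3:9
assert list(range(1, 6)) == [1, 2, 3, 4, 5]       # 1:5 (default increment 1)
```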

Indexing is one-based, which is the usual convention for matrices in mathematics, although not for some programming languages such as C, C++, and Java.

Matrices can be defined by separating the elements of a row with blank space or a comma and using a semicolon to terminate each row. The list of elements should be surrounded by square brackets: []. Parentheses: () are used to access elements and subarrays (they are also used to denote a function argument list).

>> A = [16 3 2 13; 5 10 11 8; 9 6 7 12; 4 15 14 1]
A =
16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1

>> A(2,3)
ans =
11

Sets of indices can be specified by expressions such as 2:4, which evaluates to [2, 3, 4]. For example, a submatrix taken from rows 2 through 4 and columns 3 through 4 can be written as:

>> A(2:4,3:4)
ans =
11 8
7 12
14 1

A square identity matrix of size n can be generated using the function eye, and matrices of any size with zeros or ones can be generated with the functions zeros and ones, respectively.

>> eye(3)
ans =
1 0 0
0 1 0
0 0 1

>> zeros(2,3)
ans =
0 0 0
0 0 0

>> ones(2,3)
ans =
1 1 1
1 1 1
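The one-based, inclusive indexing above contrasts with the zero-based, exclusive slicing of C-family languages. A Python comparison using the same matrix A, as our own illustration:

```python
A = [[16, 3, 2, 13],
     [5, 10, 11, 8],
     [9, 6, 7, 12],
     [4, 15, 14, 1]]

# MATLAB A(2,3) is the element in row 2, column 3; zero-based that is A[1][2].
element = A[1][2]  # 11

# MATLAB A(2:4,3:4) is inclusive; Python slices exclude the upper bound.
sub = [row[2:4] for row in A[1:4]]
print(element, sub)  # 11 [[11, 8], [7, 12], [14, 1]]
```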

PREPARING THE INTERFACE FOR DIFFERENT SOUND FREQUENCIES ALONG WITH EMOTIONS

FREQUENCY RANGE OF MEANINGFUL SIGNAL

As noted earlier, although human speech has a wide frequency range, the important or meaningful frequency range is from 100 Hz to 5000 Hz. We have a graph of the different sound frequencies.

Fig-8: Frequency of meaningful signal


    INTERFACE OF PROJECT


    Code

function varargout = cap(varargin)
% CAP MATLAB code for cap.fig
%      CAP, by itself, creates a new CAP or raises the existing
%      singleton*.
%
%      H = CAP returns the handle to a new CAP or the handle to
%      the existing singleton*.
%
%      CAP('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in CAP.M with the given input arguments.
%
%      CAP('Property','Value',...) creates a new CAP or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before cap_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to cap_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help cap

% Last Modified by GUIDE v2.5 20-Nov-2012 10:42:52

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @cap_OpeningFcn, ...
                   'gui_OutputFcn',  @cap_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT


% --- Executes just before cap is made visible.
function cap_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to cap (see VARARGIN)

% Choose default command line output for cap
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes cap wait for user response (see UIRESUME)
% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = cap_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;

% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton3.


    function pushbutton3_Callback(hObject, eventdata, handles)
    % hObject    handle to pushbutton3 (see GCBO)
    % eventdata  reserved - to be defined in a future version of MATLAB
    % handles    structure with handles and user data (see GUIDATA)

    % --- Executes on button press in pushbutton4.
    function pushbutton4_Callback(hObject, eventdata, handles)
    % hObject    handle to pushbutton4 (see GCBO)
    % eventdata  reserved - to be defined in a future version of MATLAB
    % handles    structure with handles and user data (see GUIDATA)

    % --- Executes on button press in pushbutton5.
    function pushbutton5_Callback(hObject, eventdata, handles)
    % hObject    handle to pushbutton5 (see GCBO)
    % eventdata  reserved - to be defined in a future version of MATLAB
    % handles    structure with handles and user data (see GUIDATA)

    % --- Executes on button press in pushbutton6.
    function pushbutton6_Callback(hObject, eventdata, handles)
    % hObject    handle to pushbutton6 (see GCBO)
    % eventdata  reserved - to be defined in a future version of MATLAB
    % handles    structure with handles and user data (see GUIDATA)

    % --- Executes on button press in pushbutton7.
    function pushbutton7_Callback(hObject, eventdata, handles)
    % hObject    handle to pushbutton7 (see GCBO)
    % eventdata  reserved - to be defined in a future version of MATLAB
    % handles    structure with handles and user data (see GUIDATA)

    % --- Executes on button press in pushbutton9.
    function pushbutton9_Callback(hObject, eventdata, handles)
    % hObject    handle to pushbutton9 (see GCBO)
    % eventdata  reserved - to be defined in a future version of MATLAB
    % handles    structure with handles and user data (see GUIDATA)
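The GUIDE-generated callbacks above are empty stubs. As a sketch of how one of them might be filled in to trigger a sound effect, the body below reads and plays an audio file; the filename `click.wav` is a hypothetical example, not part of the original project, and in older releases such as the 2012 MATLAB used here, `wavread` plays the role of `audioread`:

```matlab
% --- Hypothetical callback body: play a short sound effect on button press.
%     (click.wav is an assumed example file on the MATLAB path.)
function pushbutton1_Callback(hObject, eventdata, handles)
[y, Fs] = audioread('click.wav');   % read samples and sample rate
sound(y, Fs);                       % play through the default audio device
```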


    EXPECTED OUTCOMES OF STUDY

    This project aims to add sound effects to the MATLAB environment, so the expected outcome is

    that whenever we use MATLAB we are assisted with audio feedback while doing our work. The

    relevant sound parameters are amplitude, measured in decibels (dB), and frequency (pitch), measured in hertz (Hz).
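As a concrete illustration of the intended outcome, a short notification tone can be synthesized and played directly from MATLAB; the frequency, amplitude, and duration values below are arbitrary illustrative choices:

```matlab
% Synthesize and play a brief 440 Hz tone (all values are illustrative).
Fs = 8000;                      % sample rate in Hz
t  = 0:1/Fs:0.25;               % 0.25-second time vector
y  = 0.5 * sin(2*pi*440*t);     % 440 Hz sine wave at half amplitude
sound(y, Fs);                   % play through the default audio device
```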

    Sound is perhaps the most sensuous element of multimedia. It is meaningful speech in any language, from a whisper to a scream. It can provide the listening pleasure of music, the startling

    accent of special effects, or the ambience of a mood-setting background. Some "feel-good"

    music powerfully fills the heart, generating emotions of love or otherwise elevating listeners closer to heaven. How you use the power of sound can make the difference between an ordinary

    multimedia presentation and a professionally spectacular one. Misuse of sound, however, can

    wreck your project. Try testing all 56 of your ringtones on a crowded bus: your fellow passengers will soon wreck your day.

    When something vibrates in the air by moving back and forth (such as the cone of a

    loudspeaker), it creates waves of pressure. These waves spread like the ripples from a pebble tossed into a still pool, and when they reach your eardrums, you experience the changes of pressure, or vibrations, as sound. In air, the ripples propagate at about 750 miles per hour, or

    Mach 1 at sea level. Sound waves vary in sound pressure level (amplitude) and in frequency or

    pitch. Many sound waves mixed together form an audio sea of symphonic music, speech, or just plain noise.

    Acoustics is the branch of physics that studies sound. Sound pressure levels (loudness or

    volume) are measured in decibels (dB); a decibel measurement is actually the ratio between a chosen reference point on a logarithmic scale and the level that is actually experienced. When

    you quadruple the sound output power, there is only a 6 dB increase; when you make the sound

    100 times more intense, the increase in dB is not hundredfold, but only 20 dB. A logarithmic

    scale makes sense because humans perceive sound pressure levels over an extraordinarily broad dynamic range.
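The decibel figures above follow directly from the definition dB = 10·log10(P/P0) for a power ratio, which is easy to check in MATLAB:

```matlab
% Decibel change for a given power ratio: dB = 10*log10(P/P0).
db_gain = @(ratio) 10 * log10(ratio);

db_gain(4)     % quadrupling the power -> about 6.02 dB
db_gain(100)   % 100x the power        -> exactly 20 dB
```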

    Sound is energy, just like the waves breaking on a sandy beach, and too much volume can

    permanently damage the delicate receiving mechanisms behind your eardrums, typically dulling your hearing in the 6 kHz range. In terms of volume, what you hear subjectively is not what you

    hear objectively. The perception of loudness is dependent upon the frequency or pitch of the

    sound: at low frequencies, more power is required to deliver the same perceived loudness as for a sound in the middle or higher frequency ranges. You may feel the sound more than hear it. For

    instance, when the ambient noise level is above 90 dB in the workplace, people are likely to

    make increased numbers of errors in susceptible tasks, especially when there is a high-frequency component to the noise. When the level is above 80 dB, it is quite impossible to use a

    telephone. Experiments by researchers in residential areas have shown that a sound generator at

    45 dB produces no reaction from neighbors; at 45 to 55 dB, sporadic complaints; at 50 to 60 dB,

    widespread complaints; at 55 to 65 dB, threats of community action; and at more than 65 dB, vigorous community action, possibly more aggressive than when you tested your ringtones on

    the bus. This neighborhood research from the 1950s continues to provide helpful guidelines for

    practicing rock musicians and multimedia developers today. Human hearing is less able to identify the location from which lower frequencies are generated. In surround sound systems,


    subwoofers can be placed wherever their energy is most efficiently radiated (often in a corner),

    but midrange speakers should be carefully placed.

    Impulse response components corresponding to a direct sound, an initially reflected sound, and a

    reverberation are separated from an original impulse response. A point N1 at which the impulse

    response component corresponding to the direct sound starts, a point N2 at which the impulse response components corresponding to the direct sound and the initially reflected sound end, and a

    point N3 at which the impulse response component corresponding to the reverberation starts are represented by data associated with the impulse response components. After the levels of the

    impulse response components corresponding to the direct sound and the initially reflected sound

    are adjusted, these components are combined with the impulse response corresponding to the reverberation. By performing a convolution of the audio

    data with the combined impulse response, a reverberation is generated and added to the original

    sound.
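The scheme above can be sketched in MATLAB. The split points N1, N2, N3, the gain value, and the toy impulse response below are hypothetical placeholders; in practice they would come from a measured impulse response:

```matlab
% Hypothetical sketch: split an impulse response h at points N1..N3,
% rescale the direct/early-reflection part, recombine, and convolve
% with the audio. (All numeric values are illustrative placeholders.)
Fs = 8000;
h  = [1, zeros(1, Fs-1)] + 0.3 * randn(1, Fs) .* exp(-(0:Fs-1)/2000); % toy IR
N1 = 1; N2 = 400; N3 = 401;       % assumed split points, in samples
g_early = 0.8;                    % gain for direct + early reflections

h_early  = h(N1:N2) * g_early;    % level-adjusted direct/early part
h_reverb = h(N3:end);             % reverberation tail
h_comb   = [h_early, h_reverb];   % recombined impulse response

x = randn(1, Fs);                 % stand-in for the input audio data
y = conv(x, h_comb);              % reverberant output signal
```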

    The present invention relates to a sound effect adding apparatus such as may, for example, be

    used with a reverberation adding apparatus that adds reverberation to an original audio signal. As an apparatus that adds a sound effect to an audio signal, a reverberator is known. The

    reverberator is used to add reverberation to an audio signal in, for example, a recording studio, so that listeners have an impression of space and depth. When reverberation is added

    to an audio signal that has been recorded in a studio or the like, the effect of a performance in a hall, or a special effect, can be added to the audio signal.

    Formerly, to add reverberation to an audio signal, the sound was recorded in, for example, a hall

    where natural reverberation was obtained. Alternatively, a steel-plate echo apparatus or the like was

    used to obtain a reverberative effect. In a recent reverberator, such an effect is accomplished

    electrically. More recently, as digital technologies have advanced, apparatus that digitally synthesizes reverberation has become common.

    When reverberation is added to an audio signal by a digital process, a recursive digital filter is,

    for example, used. In the recursive digital filter, the input digital audio signal is attenuated and

    fed back on itself, generating reverberation. The generated reverberation is then mixed with the

    original digital audio signal. In practice, initial reflection sound is added at a position delayed by a predetermined time period relative to the direct sound, and after a further predetermined time period, reverberation

    is added. The delay of the reverberation relative to the direct sound is referred to as pre-

    delay. By adjusting the reverberation time, adding sub-reverberation, and finely adjusting the level of the reverberation, a variety of types of sound can be generated.
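A minimal sketch of such a recursive (feedback) reverberator is a single comb filter with a pre-delay, y[n] = x[n-D] + g·y[n-M]; the delay lengths and feedback gain below are illustrative choices, not values taken from the text:

```matlab
% Hypothetical single comb-filter reverb: y[n] = x[n-D] + g*y[n-M].
% Pre-delay D, loop delay M, and feedback gain g are illustrative.
Fs = 8000;
D  = round(0.020 * Fs);       % 20 ms pre-delay
M  = round(0.050 * Fs);       % 50 ms recirculating (loop) delay
g  = 0.6;                     % feedback gain (< 1 so the tail decays)

b = [zeros(1, D), 1];         % feedforward part: delayed direct sound
a = [1, zeros(1, M-1), -g];   % feedback part: attenuated recirculation

x = [1, zeros(1, Fs-1)];      % impulse input to expose the decaying echoes
y = filter(b, a, x);          % reverberant impulse response
```

With an impulse as input, `y` shows the direct sound after D samples followed by echoes spaced M samples apart, each attenuated by a further factor of g, which is the decaying tail the text describes.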

    Reverberation in a real hall has a complicated waveform because of the various reflections and interferences of sound caused by the shape of the hall and the position of the sound source. However,

    as described above, in the method in which the original digital audio signal is processed with a

    filter, the original signal is simply attenuated, so there is the problem that listeners get an artificial impression from the generated sound.
