Retraining Kaon Neural Net
Kalanand Mishra, University of Cincinnati


Page 1: Retraining Kaon Neural Net


Kalanand Mishra, University of Cincinnati

Page 2: Motivation


• This exercise is aimed at improving the performance of KNN selectors.

• Kaon PID control samples are obtained from the D* decay: D*+ → D0 [→ K- π+] π_s+.

The track selection and cuts used to obtain the control sample are described in detail in BAD 1056 (author: Sheila Mclachlin); a code sketch of such a selection follows this list.

• The original kaon neural net (KNN) training was done by Giampiero Mancinelli & Stephen Sekula circa 2000, in analysis 3, using MC events (they did not use the PID control sample). They used 4 neural net input variables: the likelihoods from SVT, DCH, and DRC (global), plus the kaon momentum.

• I intend to use two additional input variables: the track-based DRC likelihood and the polar angle (θ) of the kaon track.

• I have started the training with the PID control sample (Run 4). I will repeat the same exercise for the MC sample and also for truth-matched MC events.

• Due to the higher statistics and better resolution of the control sample available now, I started with a purer sample (by applying tighter cuts).

• Many thanks to Kevin Flood and Giampiero Mancinelli for helping me get started and explaining the steps involved.
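For concreteness, a minimal sketch of such a control-sample selection, assuming a flat ntuple with invented column names (p_star, vtx_prob, in_dirc) rather than the actual BAD 1056 variables; the cut values are the ones studied on page 3:

import pandas as pd

def select_control_sample(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the D* control-sample cuts (values from page 3)."""
    mask = (
        (df["p_star"] > 1.5)        # CM momentum cut, GeV/c
        & (df["vtx_prob"] > 0.01)   # K-pi vertex probability cut
        & df["in_dirc"]             # track within DIRC acceptance
    )
    return df[mask]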


Page 3: K-π+ invariant mass in control sample


No P* cut: purity within 1σ = 96%. With P* > 1.5 GeV/c: purity within 1σ = 97%.

Conclusion: the P* cut improves signal purity, so we will go ahead with this cut. Other cuts: K-π+ vertex probability > 0.01 and a requirement of DIRC acceptance.
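As an illustration of the "purity within 1σ" figure of merit, the sketch below fits the K-π+ mass peak with a Gaussian plus a linear background and takes S/(S+B) inside μ ± σ. The binning, fit range, seed values, and the scipy-based fitter are assumptions for illustration, not the actual fit used here:

import numpy as np
from scipy.optimize import curve_fit

def model(m, n_sig, mu, sigma, a, b):
    # Gaussian signal on top of a linear background
    return n_sig * np.exp(-0.5 * ((m - mu) / sigma) ** 2) + a + b * m

def purity_within_1sigma(masses, bins=100, fit_range=(1.80, 1.93)):
    counts, edges = np.histogram(masses, bins=bins, range=fit_range)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # seed the fit near the D0 mass (~1.865 GeV/c^2)
    p0 = [counts.max(), 1.865, 0.007, counts.min(), 0.0]
    (n_sig, mu, sigma, a, b), _ = curve_fit(model, centers, counts, p0=p0)
    window = np.abs(centers - mu) < sigma
    signal = model(centers[window], n_sig, mu, sigma, 0.0, 0.0).sum()
    total = counts[window].sum()
    return signal / total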

Page 4: |m(D*) - m(D0)| distribution in control sample


Conclusion: the P* cut does not affect the ∆m resolution.
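For reference, ∆m here is the mass difference m(K π π_s) - m(K π), which peaks near 145.4 MeV/c² for true D*+ decays. A minimal four-vector sketch (illustrative numpy arrays in (E, px, py, pz) order, not the analysis code):

import numpy as np

def inv_mass(*p4s):
    e, px, py, pz = np.sum(p4s, axis=0)   # add the four-vectors
    return np.sqrt(e**2 - px**2 - py**2 - pz**2)

def delta_m(p4_k, p4_pi, p4_pis):
    # m(K pi pi_s) - m(K pi): the slow pion drives the narrow peak
    return inv_mass(p4_k, p4_pi, p4_pis) - inv_mass(p4_k, p4_pi)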

Page 5: Momentum and cos θ distributions


[Plots: kaon P, pion P, kaon cos θ, pion cos θ. The momentum distributions are very similar for K and π; the cos θ distributions are almost identical.]

Page 6: Plab vs cos θ distribution


[Plots: Plab vs cos θ for kaons and for pions.]

Conclusion: almost identical distributions for the kaon and pion, except at the vertical left edge, where soft pions make a slightly fuzzy boundary.

Page 7: Purity as a function of kaon momentum


[Purity in six kaon-momentum bins: 93%, 97%, 98%, 98%, 98%, 98%.]

Page 8: NN input variables


[Plots: scaled P, scaled θ, one scaled variable shown for reference (not an input variable), and SVT lh.]

Input variables are: P, θ, svt-lh, dch-lh, glb-lh, trk-lh.

Page 9: NN input variables (continued)


[Plots: DCH lh, DRC-glb lh, DRC-trk lh.]

Input variables are: P, θ, svt-lh, dch-lh, glb-lh, trk-lh.
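A minimal sketch of assembling the six inputs per track, with P and θ linearly rescaled to [0, 1] (the "scaled" variables shown on pages 8-9); the ranges below are assumptions for illustration only:

import numpy as np

P_RANGE = (0.0, 4.0)        # lab momentum range in GeV/c (assumed)
THETA_RANGE = (0.3, 2.6)    # polar-angle range in radians (assumed)

def rescale(x, lo, hi):
    return (x - lo) / (hi - lo)

def knn_inputs(p, theta, svt_lh, dch_lh, glb_lh, trk_lh):
    # the six NN inputs: scaled P, scaled theta, and four likelihoods
    return np.array([
        rescale(p, *P_RANGE),
        rescale(theta, *THETA_RANGE),
        svt_lh, dch_lh, glb_lh, trk_lh,
    ])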

Page 10: NN output at optimal point


A sample of 120,000 events with inputs: svt-lh, dch-lh, glb-lh, trk-lh, P, and θ.

Page 11: Signal performance


A sample of 120,000 events with inputs: svt-lh, dch-lh, glb-lh, trk-lh, P, and θ.

Page 12: Background performance


A sample of 120,000 events with inputs: svt-lh, dch-lh, glb-lh, trk-lh, P, and θ.

Page 13: Performance vs number of hidden nodes


A sample of 120,000 events with inputs: svt-lh, dch-lh, glb-lh, trk-lh, P, and θ.

Performance saturates at around 18 hidden nodes.
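A sketch of such a hidden-node scan, using scikit-learn as a stand-in for whatever trainer was actually used (X holds the six input columns, y is 1 for kaons and 0 for pions; ROC AUC serves as a single stand-in number for the efficiency/rejection curves):

from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

def scan_hidden_nodes(X, y, widths=(4, 8, 12, 16, 18, 24, 32)):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    scores = {}
    for n in widths:
        net = MLPClassifier(hidden_layer_sizes=(n,), max_iter=500, random_state=0)
        net.fit(X_tr, y_tr)
        # area under the kaon-efficiency vs pion-efficiency curve
        scores[n] = roc_auc_score(y_te, net.predict_proba(X_te)[:, 1])
    return scores   # expect a plateau around ~18 hidden nodes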

Page 14: Summary


• I have set up the machinery and started training the K neural net.

• One way to proceed is to include P and θ as input variables after flattening the sample in the P-θ plane (to get rid of the built-in kinematic bias spread across this plane); a sketch of such flattening weights follows below.

• The other way is to do the training in bins of P and cos θ. This approach seems more robust but comes with more overhead and requires more time and effort. Also, it may or may not have a performance advantage over the first approach.

• By analyzing the performance of the neural net over a sample using both of these approaches, we will decide which way to go.

• The performance of the neural net will be analyzed in terms of kaon efficiency vs. pion rejection [and also kaon efficiency vs. pion rejection as a function of both momentum and θ].

• Stay tuned!
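A minimal sketch of the flattening weights mentioned in the first option above: weight each track by the inverse of its occupancy in a 2D P-θ histogram, so the net cannot learn the D* kinematics instead of genuine PID information (the binning and normalization are illustrative choices):

import numpy as np

def flatten_weights(p, theta, p_bins=20, theta_bins=20):
    occ2d, p_edges, t_edges = np.histogram2d(p, theta, bins=(p_bins, theta_bins))
    ip = np.clip(np.digitize(p, p_edges) - 1, 0, p_bins - 1)
    it = np.clip(np.digitize(theta, t_edges) - 1, 0, theta_bins - 1)
    occ = occ2d[ip, it]                 # occupancy of each track's own bin
    weights = 1.0 / occ                 # every track's bin has occ >= 1
    return weights / weights.mean()     # normalize to unit average weight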
