TRANSCRIPT
Foreign Exchange Data Crawling and Analysis for Knowledge Discovery Leading to Informative Decision Making
Data Mining
Dr. Nasser Ghadiri
Mostafa Arjmand
This presentation includes the following:
• Introduction
• Data Coverage And Size
• Framework Structure And The Data Fetching Mechanism
• Experimental study
• Ranking And Classification
• Conclusions
Introduction
Introduction:
• Foreign exchange (Forex) market
• Development of a framework
• The framework allows streaming and visualization of historical and current currency prices in close to real time
• The framework benchmarks every monitored broker to decide whether it is trustworthy
Broker
• However, the human decision-making process is mostly subjective and emotional
• Not every broker necessarily considers the right factors affecting the exchange rate, or gives the right weight to each
CTS-Forex Performance Study
A Forex monitoring system designed to:
• Automatically track and monitor brokers
• By fetching, visualizing, and analyzing their announced exchange rates
• Our current effort focuses on the EUR/USD exchange rate, which covers 24.1% of the forex market
CTS FOREX PERFORMANCE STUDY MAIN FEATURES
Main Components of the Proposed Framework: Visualizing the Data Streams
The ability to visualize previously captured data as well as current data is essential for an expert
1) Single Broker Visualization
2) Multiple Brokers Visualization
3) Ranking and Classification Visualization
Main Components of the Proposed Framework: Analyzing Captured Data
The ability to analyze and interpret the captured data provides useful information for enhancing the decision-making process
Short term
Medium term
Long term
DATA COVERAGE AND SIZE
Data Coverage and Size: The Data Structure
Snapshot: the data for each broker
• Date of the snapshot
• Number of seconds recorded in the snapshot
Tick: each piece of data is called a tick
• Date of the tick
• The ASK price of the currency
• The BID price of the currency
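The snapshot and tick structure above can be sketched as simple data classes (the field names and layout are illustrative assumptions, not the paper's actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Tick:
    """One price observation from a broker's feed."""
    time: datetime   # date/time of the tick
    ask: float       # ASK price of the currency pair
    bid: float       # BID price of the currency pair

    @property
    def spread(self) -> float:
        # The broker's quoted spread at this instant
        return self.ask - self.bid

@dataclass
class Snapshot:
    """All ticks captured for one broker in one recording period."""
    broker: str
    date: datetime
    seconds_recorded: int
    ticks: list = field(default_factory=list)
```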
Recording Periods
Indeed, data is the main source for knowledge discovery leading to informative decision making
Four fifths of the data are ignored
Data Coverage and Size: Brokers Data Summary
• A total of 52 brokers were monitored
• All the collected data comes from the MetaTrader 4 platform
BROKERS SUMMARY TABLE
Data Coverage and Size: Daily Data Distribution (Monthly Stacked)
FRAMEWORK STRUCTURE AND DATA FETCHING MECHANISM
Framework Structure And The Data Fetching Mechanism
System Block Diagram
Framework Features
• The main function of this framework is capturing and visualizing streaming data
• The number of online brokers is increasing
• This rapid increase has to be handled by our framework through the ability to easily add new brokers
• Brokers use different platforms to provide traders with several features, so the framework must handle them with minimal modifications
• The framework should provide runtime visualization as well as visualization of previously captured data
Framework Features
Framework should provide the ability to analyze data at three levels:
1) Short term analysis
2) Medium term analysis
3) Long term analysis
Available Resources
MetaTrader 4
• The software consists of two components: a client and a server
• The server is used by brokers to provide a client component to their clients
• The client provides end users with the ability to:
• View live streaming prices
• Place orders
• Write their own scripts that automate the trading process
Client Applications Installation
• The first task is to install the client applications (MetaTrader) to start fetching the data
• The problem we faced was fitting all clients on the same computer, as they consume a lot of memory
• Distributing the client applications saves more memory for short-term analysis
Local Data Centralization
Client-Server Model vs. Database Insertion and Selection Model
The first model we came up with was a client/server model; the second is a database insertion and selection model
Although the first showed better results, we decided to stick with the second model
• It provides better integrity
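The database insertion and selection model can be sketched with an in-memory SQLite database (SQLite and the table layout are my assumptions; the presentation does not name the local database engine):

```python
import sqlite3

# In-memory SQLite stands in for the framework's local database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ticks (
        broker TEXT,
        tick_time TEXT,
        ask REAL,
        bid REAL
    )
""")

def insert_tick(broker, tick_time, ask, bid):
    # Insertion side: each fetcher writes its ticks into the shared table.
    conn.execute("INSERT INTO ticks VALUES (?, ?, ?, ?)",
                 (broker, tick_time, ask, bid))
    conn.commit()

def select_ticks(broker):
    # Selection side: a consumer reads one broker's stream in time order.
    cur = conn.execute(
        "SELECT tick_time, ask, bid FROM ticks WHERE broker = ? "
        "ORDER BY tick_time",
        (broker,))
    return cur.fetchall()
```

Decoupling writers and readers through the database is what gives this model its integrity advantage over the client/server model.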
Global Data Centralization
All the fetched data on each computer is centralized in its fetcher application
• This allows us to run short-term analysis and some medium-term analysis
• But it does not allow us to run long-term analysis, such as broker ranking and benchmarking
• Centralizing the data, however, comes with drawbacks
Online/Local Visualization
One of the implemented features was a visualization system for online users to track their favorite brokers and compare them against others
• The first model was storing data into a MySQL database
• The second model was pre-compiling the data into JSON data files
We still had to find a way to pre-compile the data stored locally on our computers and push it to the online server
Database model vs. JSON model
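A minimal sketch of the JSON pre-compilation step, which flattens locally stored ticks into a static payload the online visualizer can serve (the field names are illustrative assumptions):

```python
import json

def precompile_ticks(ticks):
    """Turn a list of (time, ask, bid) tuples into the JSON string
    that would be written to a static file for the online visualizer,
    avoiding per-request database queries."""
    payload = [{"t": t, "ask": a, "bid": b} for (t, a, b) in ticks]
    return json.dumps(payload)
```

Serving such pre-compiled files is what lets the JSON model trade query flexibility for higher visualization throughput.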
Manager Application
Online Visualization Structure
The manager application is responsible for:
• Running medium- and long-term analyses
• Ranking the brokers
• Benchmarking them
• Prioritizing the snapshots
EXPERIMENTAL STUDY
Experimental study
Windows 8 Pro with 32 GB of installed memory (RAM), Intel Core i7-3770 CPU @ 3.40 GHz. The first disk is an HDD, the second an SSD.
DATA SET SUMMARY
Local centralization study
Visualization component study
Local Data Centralization Study
Local Data Centralization (Model 1 vs Model 2, HDD), Week 4
Visualization Study
WEEK 1 - VISUALIZATION THROUGHPUT RESULTS (MODEL 1 VS MODEL 2)
WEEK 2 - VISUALIZATION THROUGHPUT RESULTS (MODEL 1 VS MODEL 2)
Fetching data from the database
Fetching data from pre-compiled JSON files
RANKING AND CLASSIFICATION
Ranking And Classification
Our main focus is to study the performance of brokers and find trustworthy ones
The two approaches are complementary
Feed-price performance
Feed-price performance key features:
• Spread
• Consistency
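Assuming spread and its stability are the two key features, one minimal way to measure them from a single snapshot (the metric definitions are my own, not the paper's):

```python
from statistics import mean, pstdev

def spread_stats(ticks):
    """ticks: list of (ask, bid) pairs from one broker's snapshot.
    Returns the average spread and its standard deviation; a steady,
    low spread suggests a more consistent price feed."""
    spreads = [ask - bid for (ask, bid) in ticks]
    return mean(spreads), pstdev(spreads)
```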
Ranking and Classification
1) The Algorithm:
Ranking And Classification
• The ranking algorithm was just the first step for distinguishing good brokers
• We do not have any gold standard rules to specify ranges of good and bad brokers’ scores
• Therefore, we decided to use a clustering algorithm
• For each ranked snapshot, we cluster the brokers based on their final scores
• Thus we are clustering one-dimensional data.
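Since the final scores are one-dimensional, a minimal 1-D k-means is enough to illustrate the clustering step (this sketch is my own, with k = 3 assumed for the trustworthy/gray/bad split; the paper does not fix the implementation):

```python
def kmeans_1d(scores, k=3, iters=100):
    """Minimal 1-D k-means: cluster brokers' final scores into k groups.
    Returns (centers, labels) with labels[i] the cluster of scores[i]."""
    xs = sorted(set(scores))
    # Seed centers evenly across the observed score range
    centers = [xs[int(i * (len(xs) - 1) / (k - 1))] for i in range(k)]
    labels = [0] * len(scores)
    for _ in range(iters):
        # Assign each score to its nearest center
        labels = [min(range(k), key=lambda j: abs(x - centers[j]))
                  for x in scores]
        # Recompute each center as the mean of its members
        new_centers = []
        for j in range(k):
            members = [x for x, lab in zip(scores, labels) if lab == j]
            new_centers.append(sum(members) / len(members) if members
                               else centers[j])
        if new_centers == centers:
            break
        centers = new_centers
    return centers, labels
```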
Clustering
Hierarchical / Density-Based / K-Means
Distribution of Brokers w.r.t. their Most Common Class
In our case, we want to find trustworthy brokers, gray ones, and bad ones
Ranking and Classification
2) Membership:
To determine whether a broker is good or not, we have to study their memberships in all classes during their life cycle
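Studying memberships over a broker's life cycle can be sketched by tallying the class assigned in each ranked snapshot (a minimal illustration of the idea, not the paper's exact rule):

```python
from collections import Counter

def most_common_class(memberships):
    """memberships: per-broker list of cluster labels, one per ranked
    snapshot over the broker's life cycle. A broker is judged by the
    class it falls into most often."""
    return {broker: Counter(labels).most_common(1)[0][0]
            for broker, labels in memberships.items()}
```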
CONCLUSIONS
CONCLUSIONS
• We presented a group of issues that exist in the current forex market
• We designed and implemented a full framework to monitor a list of brokers by fetching their data continuously
• Comparing them to each other thus speeds up and enhances domain experts' decision-making process
THANK YOU FOR LISTENING
Review notes (translated from Persian):
• Repeated content
• Presentation title does not match the paper
• Future work presented in a scattered manner
• Lack of order and organization in presenting the figures and tables
• No framework for pairs other than EUR/USD
• Too little explanation of clustering
• Scattered content
• Only one fifth of the data
• How to work with the presented framework