021018 Price Gridtestbeds Us View
TRANSCRIPT
-
Coordinating New Grid Proposals
Larry Price
HEP-CCC
18 October 2002
-
STATUS OF
HIJTB and GLUE
Ruth Pordes, Fermilab
Representing the Grid Interoperability Working Groups:
-
Joint Technical Board
No meetings held since last HICB in order to continue progress on GLUE Stage I deliverables.
Search for a co-chair may finally be over: Ian Bird, LHC Computing Project Grid Deployment Area Manager, is proposed for (and would accept) the responsibility.
We ask for your endorsement of this.
Plan to hold a JTB meeting to propose 2 new GLUE/JTB sub-projects in the next couple of weeks; revert to the usual slot of 4 pm INFN / 7 am Pacific time on the 1st Monday of each month.
Proposed new sub-projects:
Distribution and Meta-Packaging
Validation and Test Suites
-
Distribution and Meta-Packaging
Interoperable distribution and configuration utilities identified as a definite need by all the recent trans-atlantic
demonstration and validation work.
EDG uses RPMs as the packaging standard and LCFG as the distribution tool.
VDT uses PACMAN as a meta-packaging and distribution tool which can support RPMs, Tar files, GPT
packaging etc.
VDT includes some RPMs from EDG releases. DataTAG supporting pacman for interoperability needs.
For the IST/SC demos products will be supported as RPMs and through pacman as needed.
Support for this group comes from:
EDG technical management (Bob Jones), Trillium WP4 manager (Olof Baring), LCG, DataTAG. All these projects would contribute to the work of the group.
Proposal (agreed by DataTAG WP4, iVDGL interoperability and line managements) is that the co-chairs are Flavia Donno (LCG) and Alain Roy (VDT). Members to include Olof, Saul, ..
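As a toy illustration of what meta-packaging over heterogeneous native formats means in practice, here is a minimal sketch in Python (not the pacman or LCFG format; the package name, version and URLs are invented): a manifest maps one logical package to whichever RPM or tarball a site should fetch.

# Toy meta-packaging manifest: one logical package, several native formats.
# Names, versions and URLs below are invented for the example.
MANIFEST = {
    'globus-client': {
        'rpm':     'http://repo.example.org/rpms/globus-client-2.2-1.i386.rpm',
        'tarball': 'http://repo.example.org/tars/globus-client-2.2.tar.gz',
    },
}

def resolve(package, preferred=('rpm', 'tarball')):
    """Return the first available native artifact for a logical package."""
    formats = MANIFEST[package]
    for fmt in preferred:
        if fmt in formats:
            return fmt, formats[fmt]
    raise KeyError('no supported format for %s' % package)

print(resolve('globus-client'))                 # EDG-style sites would prefer 'rpm'
print(resolve('globus-client', ('tarball',)))   # others might take the tarball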
-
Validation and Test Suites
Current status:
LCG hired a Validation and Test manager as part of the Grid Deployment area.
EDG extending test, regression and validation tests in collaboration with LCG.
Globus-iVDGL test harness developed as part of GLUE.
VDT test scripts.
LCG 1.0 includes validation of accepted/deployed middleware.
GLUE project proposal states that it will hand ongoing responsibility for interoperability to the grid middleware projects (EDG, VDT).
Proposal is to initiate a collaborative project in this area between LCG, VDT, and EDG under the
umbrella of the JTB.
-
GLUE - Overview
GLUE is currently managed and mainly staffed by the 2 projects with interoperability in their deliverables: DataTAG WP4 and iVDGL.
Goals, scope and deliverables V0.1.2 posted in June (http://www.hicb.org/glue/glue-v0.1.2.doc) and referred to in Edinburgh.
Stage I:
Phase 1: in test and incorporation in software releases.
Phase 2: ~30% done; need to address new components: RLS, job scheduler to take into account Data Distribution.
Phase 3 (authorization, data handling, packaging): discussions and collaboration started.
V0.1.3 (Oct 11th) edits from Antonia (EDT WP4 manager). Introduces start of goals for GLUE Stage II:
Interoperability of the next releases of DataGRID and VDT.
More effort on interoperability between the users and the Grid as a collection of different grid domains.
Interoperability between users and grid includes job descriptions, job monitoring, data localization, quality of service specification.
Any GLUE code, configuration and documents developed will be deployed and supported through the EDG and VDT release structure.
Once interoperability is demonstrated and part of the ongoing culture of global grid middleware projects, GLUE should not be needed.
http://www.hicb.org/glue/glue-v0.1.3.doc
-
Reminder of nature of EDG and VDT Releases
EDG Releases are the collected, tagged and supported set of middleware and application software packages and procedures from the European DataGrid project, available as RPMs with a master location.
Includes Application Software for deployment and testing on EDG sites.
Most deployment expects most/all packages to be installed with a small set of uniform configuration.
Well developed mechanisms for software installation, validation and release.
Includes rebuilt and configured Globus software with a few modifications.
VDT - Virtual Data Toolkit - Releases are the collected, tagged and supported set of middleware and common utilities from the US Physics Grid Projects (GriPhyN, iVDGL, PPDG). VDT scope is currently less than that of the EDG:
Does not today include collective and higher level services such as User Interfaces, Storage Management.
Configuration procedures less evolved.
Meta-packaging and distribution mechanisms follow component model.
Uses published releases of Globus as binaries + versioned experimental patches if needed.
EDG and VDT: base layer of software/protocols were/are the same:
GLOBUS: X509 certificates; GSI Authentication; GridFTP; MDS LDAP based monitoring and resource discovery framework; GRAM job submission protocol and interface.
Authorization: LDAP VO service.
CONDOR: Matchmaking (ClassAds); Grid Job Scheduling; Planning Language (DAGMan).
File Movement: GDMP; GridFTP.
Storage Control Interface: SRM.
-
The Glue project - the people involved day to day
Sergio Andreozzi (DataTAG): Schema, Glue testbed
Carl Kesselman (iVDGL, Globus): Schema
Olof Baring (EDG WP4): Schema, Information providers
Peter Kunszt (EDG WP2): Schema, Data Movement and Replication
Rick Cavanaugh (GriPhyN, iVDGL): Applications
Doug Olson (PPDG): Authentication, Authorization
Roberto Cecchini (EDG, DataTAG): Authentication, Authorization
Ruth Pordes (PPDG, iVDGL): Testbeds, Applications
Vincenzo Ciaschini (DataTAG): Glue testbed, job submission
David Rebatto (DataTAG): Applications
Ben Clifford (iVDGL, Globus): MDS development
Alain Roy (iVDGL, Condor): Virtual Data Toolkit packaging, support
Ewa Deelman (iVDGL, Globus): Schema, VO Operations
Dane Skow (PPDG): Authentication, Authorization
Luca Dell'Agnello (DataTAG): Authentication, Authorization
Scott Gose (iVDGL, Globus): Testbed operations, Glue validation tests
Alan DeSmet (PPDG, Condor): Applications
Massimo Sgaravatto (EDG WP1): Schema, Job Scheduling
Flavia Donno (EDG, DataTAG, LCG): Applications, Job Submission, Data Movement
Jenny Schopf (PPDG, iVDGL, Globus): Schema, Monitoring
Sergio Fantinel (DataTAG): Applications
Arie Shoshani (PPDG, LBNL): Storage Interface (SRM)
Enrico Ferro (DataTAG): Distribution, Applications
Fabio Spataro (DataTAG): Authentication, Authorization
Rob Gardner (iVDGL): Applications, Testbed
Regina Tam (EDG WP5): Schema
Jerry Gieraltowski (PPDG): Applications
Brian Tierney (PPDG, LBNL): Schema, Monitoring
John Gordon (EDG WP5): Storage Schema and Services
Luca Vaccarossa (DataTAG): Applications
David Groep (EDG): Authorization
Cristina Vistoli (DataTAG): Schema, Coordination
Leigh Grunhoefer (iVDGL): Authentication, Testbed
Saul Youssef (iVDGL): Software Distribution, Applications
-
A & A
Authentication further needs:
Automated procedures to disseminate revocation; address issues of security of private keys.
Authorization:
LDAP VO servers + automated generation of gridmap files work for now.
Plans to evaluate EDG 1.2.2 LCAS and VOMS in the US (PPDG-SiteAA, VDT) and GT 2.2 CAS testing in the EU.
Need to develop full support for Virtual Organizations - membership of multiple organizations, dynamic policy based authorization to sub-groups in a VO, authorization by service and/or resource, etc.
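As an illustration of the "LDAP VO server + automated generation of gridmap files" model above, here is a minimal Python sketch; the DNs, account names and the make_gridmap helper are invented for the example, and real tools (such as the EDG mkgridmap utility) pull the member list from the VO LDAP server.

# Sketch: turn VO membership (DN -> local account) into grid-mapfile lines.
# In the real tools the member list comes from an LDAP VO server; it is
# hard-coded here purely for illustration.
vo_members = [
    ('/C=US/O=Example/OU=Lab/CN=Alice Analyst', 'uscms001'),     # hypothetical DN
    ('/C=IT/O=Example/OU=INFN/CN=Bruno Builder', 'datatag002'),  # hypothetical DN
]

def make_gridmap(members):
    """Return grid-mapfile text: one '"<DN>" <account>' entry per line."""
    return '\n'.join('"%s" %s' % (dn, account) for dn, account in members) + '\n'

print(make_gridmap(vo_members))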
Plan that the PPDG SiteAA project will report status and recommendations for the path PPDG
should take at our December 19th steering meeting in ISI/Caltech.
Would like to invite some European counterparts.
Would like to stimulate some concrete project proposals.
European meeting in December will be attended by US representatives.
(If you can stomach much mail on the subject join the PPDG SiteAA mail list!)
-
Job Submission
In Place Today: Basic Interoperable Job Submission in both directions between EDG and VDT.
One can directly submit a job to a GRAM or Computer-Gateway, identifying the location of the input and output data. This has been tested between EDG and Globus installations using test jobs (Globus test harness) and application jobs (CMS and ATLAS simulation, D0 analysis).
Initial interoperability tests of dynamic job scheduling have been done in both directions with CMS MOP between DataTAG CMS testbed nodes and US Florida CMS nodes.
Resource Discovery: basic interoperability in final test.
The Glue Schema: the structure and attributes of Information loaded into the MDS LDAP database. (For the EDG among us: R-GMA schema are mapped to/from MDS schema.)
Information Providers (static and dynamic) fill values into the MDS LDAP schema.
Resource Broker/Planning:
EDG 1.2 Resource Broker and User Interface have been used on VDT sites and through the Atlas-Grappa portal. Available as 3 standalone RPMs and can be included in VDT if need be.
EDG 2.0 Resource Broker and VDT Condor-G negotiator/dagman well aligned. Details of interoperability to be discussed at a meeting at Fermilab 18th October.
Chimera discussions at early stage.
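To make the basic job-submission path above concrete, here is a minimal sketch (Python) that composes a Condor-G style submit description targeting a Globus GRAM gatekeeper; the gatekeeper host and jobmanager name are placeholders, and the exact submit keywords should be checked against the Condor-G version shipped in the release.

# Sketch only: write a Condor-G submit description for a Globus GRAM resource.
# The gatekeeper host and jobmanager below are placeholders, not real sites.
submit_text = """\
universe        = globus
globusscheduler = gatekeeper.example.org/jobmanager-pbs
executable      = /bin/hostname
output          = job.out
error           = job.err
log             = job.log
queue
"""

with open('hostname.submit', 'w') as f:
    f.write(submit_text)

# The file would then be handed to condor_submit:
#   condor_submit hostname.submit
print(submit_text)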
-
Glue Schema
Compute Information - nodes, clusters, jobs: CE V1.0 Schema available
Included in VDT 1.1.3 release - separate package during development and testing.
Adapted EDG resource broker (consumer) in test.
WP4 Information Providers in test on the Glue testbed.
MDS Information Providers using Ganglia in test by Globus.
Storage Service Information: SE V1.0 available
Will be included in VDT 1.1.3.
EDG WP5 Information Providers in progress.
Proposal to make separate RPMs of IPs so they can be included in future VDT releases.
CE-SE relation for job scheduling: CESEBIND V1.0 available. This basically expresses the
relationship between Storage and Compute resources for use in Planning and Scheduling.
The above is planned for inclusion in the EDG November release.
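For readers unfamiliar with how these schema are consumed, a small Python sketch follows: it parses a hand-written LDIF fragment carrying a few GlueCE-style attributes (the entry is illustrative and far from the complete CE V1.0 schema) the way an information consumer such as a resource broker would see it from MDS.

# Illustrative only: a hand-written LDIF fragment with a few GlueCE-style
# attributes, and a trivial reader that turns the single entry into a dict.
ldif = """\
dn: GlueCEUniqueID=gatekeeper.example.org:2119/jobmanager-pbs-long,mds-vo-name=local,o=grid
GlueCEUniqueID: gatekeeper.example.org:2119/jobmanager-pbs-long
GlueCEInfoLRMSType: pbs
GlueCEStateFreeCPUs: 42
GlueCEStateTotalJobs: 7
"""

def parse_ldif_entry(text):
    """Very small LDIF reader: 'attr: value' lines -> dict (one entry only)."""
    entry = {}
    for line in text.splitlines():
        if ':' in line:
            attr, value = line.split(':', 1)
            entry[attr.strip()] = value.strip()
    return entry

ce = parse_ldif_entry(ldif)
print(ce['GlueCEUniqueID'], 'free CPUs:', ce['GlueCEStateFreeCPUs'])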
-
Glue Schema cont..
Network Information (NE) about to start. Will be based on common work between IEPM-BW (Les Cottrell/Warren Matthews) and UK WP7 and PPARC (Richard Hughes-Jones, Peter Clarke).
Remaining issues from CE/SE are those of VO and User information. String reference included in SE, but ongoing discussions of what the correct approach is.
All schema described in UML and implemented in LDIF.
Work remains to:
Write policy and user documentation.
Test with R-GMA for EDG; continue collaboration with CIM.
Meet with Nordugrid to see if they will incorporate the Glue Schema in a future release.
The Nordugrid schema currently define Compute Resources and are simpler than the GLUE Schema.
-
Job Description, User Interfaces and Portals
Interoperability requires that a submitted Job (Data Processing or Analysis Job either from the
command line, program calls or through a graphical interface) may run on any Authorized
resource that provides the requirements - whatever these may be from the User, Application
Infrastructure, VO, and OS.
We would also like Interoperability to mean that the User can be sitting at any site to submit the
job and retrieve the results.
Recent successes:
EDG UI/RB jobs submitted from ATLAS Grappa portal (requires UI and Grappa web services on same machine).
EDG RB jobs submitted to VDT sites for CMS MOP jobs.
VDT site submission of CMS MOP job to EDG RB.
Globus Gatekeeper submissions between EDG and VDT sites.
EDG job submission and RB on CMS VDT machine in US.
VDT client installation added EDG RB and JSS, and submitted job to testbed.
Proposal to look at common User/Application/System Job Description Languages and ideas for well specified portal services are in early stages.
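As a toy illustration of the requirement-matching idea behind a common job description (a generic Python sketch, not the ClassAd or JDL implementation; site names and attributes are invented), a job's requirements can be checked against advertised resource attributes like this:

# Toy matchmaking sketch: match a job's requirements against resource 'ads'.
# Attribute names and sites are invented; real JDL / ClassAds are far richer.
resources = [
    {'name': 'edg-ce.example.eu',  'os': 'linux', 'free_cpus': 12, 'runtime_env': {'CMS-1.2'}},
    {'name': 'vdt-ce.example.org', 'os': 'linux', 'free_cpus': 0,  'runtime_env': {'ATLAS-3.0'}},
]

job = {'os': 'linux', 'min_free_cpus': 1, 'needs_env': 'CMS-1.2'}

def matches(job, resource):
    """True if the resource satisfies all of the job's requirements."""
    return (resource['os'] == job['os']
            and resource['free_cpus'] >= job['min_free_cpus']
            and job['needs_env'] in resource['runtime_env'])

print('candidate sites:', [r['name'] for r in resources if matches(job, r)])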
-
Data Storage and Movement
Disk and Tape Access and Management
Interoperability tests (using GridFTP and SRM V1.0) within the US between (2 at a time!) Fermilab, JLAB, LBNL, Wisconsin have started with some success. It is hoped that tests with EDG sites can start soon.
SRM V2 final spec is the goal of the December workshop at CERN.
No commonality on Posix I/O layer and semantic access to data. There are multiple definitions for this in use: RFIO, DCCP, XIO. This needs to be looked at as part of the December workshop.
Data Movement, Management and Access
In place today (EDG, VDT): GridFTP, GDMP - both well tested between VDT and EDT installations.
Experiment higher level services for data management and access tested by the experiment testbeds: MAGDA, ALIEN, REFDB/BOSS, etc.
Waiting for RLS frozen release (or late beta) for GLUE testing to start.
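A minimal sketch of driving a GridFTP transfer from a script (Python, shelling out to globus-url-copy, which both EDG and VDT ship; the storage endpoints and paths below are placeholders, and a valid proxy from grid-proxy-init is assumed):

# Sketch: invoke globus-url-copy between two placeholder GridFTP endpoints.
# Requires the Globus client tools and a valid proxy (grid-proxy-init).
import subprocess

src = 'gsiftp://se1.example.org/data/run123/file.dat'  # placeholder source URL
dst = 'gsiftp://se2.example.eu/data/run123/file.dat'   # placeholder destination URL

result = subprocess.run(['globus-url-copy', src, dst])
if result.returncode != 0:
    print('transfer failed with exit code', result.returncode)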
-
GLUE PHASE I Validation
In place today: Demonstrations showing interoperability at one time and location:
Component level tests - AA, GRAM, GridFTP, GDMP, MDS.
Two node application test - MOP, ATLAS.
In progress for tomorrow: Multi-site application demonstrations with automation in installation and configuration, in progress for the November conferences:
ATLAS simulation
CMS MOP and Virtual Data
Needed for next year: Ongoing system demonstrations. Formal validation that ideally needs to be done for each new release of software and system change:
EDG test scripts
VDT test harness and scripts
? Need more work here
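Since the slide flags this area as needing more work, here is a rough sketch (Python) of the shape a per-release validation harness could take: a list of named component checks run in order, each reporting pass or fail. The commands shown (grid-proxy-info, and globusrun -a -r against a placeholder gatekeeper) are only examples of component-level checks, not an agreed test suite.

# Rough sketch of a component-level validation harness: run each named check,
# report PASS/FAIL, and exit non-zero if anything failed. Hosts are placeholders.
import subprocess
import sys

CHECKS = [
    ('proxy',     ['grid-proxy-info']),                          # is a proxy present?
    ('gram ping', ['globusrun', '-a', '-r', 'ce.example.org']),  # placeholder gatekeeper
]

def run_checks(checks):
    failures = 0
    for name, cmd in checks:
        ok = subprocess.run(cmd).returncode == 0
        print('%-10s %s' % (name, 'PASS' if ok else 'FAIL'))
        failures += 0 if ok else 1
    return failures

if __name__ == '__main__':
    sys.exit(1 if run_checks(CHECKS) else 0)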
-
Intergrid Demonstrations for IST2002 and SC2002 -
WORLDGRID
Goal is to demonstrate Applications and Test programs operating on a grid of computers and storage across EU and US sites, some of which have the EDG 1.2 release installed and some of which have VDT 1.1.3 installed; including some US sites with EDG and some EU sites with VDT.
Working week run by Rob (iVDGL coordinator) and Flavia (then DataTAG WP4.3/4.4 coordinator) held at Fermilab 2 weeks ago resulted in much work by all, and Experiment Applications were all successfully tested standalone.
Since then core team working full time to package, deploy, test, and run demonstrations on as many
WorldGrid nodes as possible:
Flavia, Sergio, Sergio, David, Marco, Silvia - DataTAG
Rob, Jorge, Saul, Leigh, Vijay, Alain, Nosa, Scott - iVDGL
Gerry, Dantong, Rich, Alan - PPDG
Will be another working week before IST2002 (4-6 Nov) in Rome.
This is a major effort which is resulting in demonstrable trans-atlantic demos.
WEB site: http://www.ivdgl.org/demo
Mailing list: [email protected]
Archive: http://web.datagrid.cnr.it/hypermail/archivio/igdemo
-
Demos of Monitoring and Experiment Applications
Monitoring across all sites using:
CMS/GENIUS, ATLAS/GRAPPA, EDG/MAPCENTER (EDG/WP7), iVDGL/GANGLIA, DataTAG/NAGIOS
ATLAS simulation jobs submitted through EDG/JDL and/or GRAPPA portal running on ATLAS-EDG, ATLAS-VDT and CMS-VDT sites.
CMS simulation jobs submitted with IMPALA/BOSS interfaced to VDT/MOP, EDG/JDL and/or Genius portal.
SDSS and CMS virtual data jobs.
Demo testbed with common GLUE and EDG schema and authorization/authentication tools:
VO (DataTAG and iVDGL) LDAP Servers in EU and US.
Identified repositories with software distribution - DataTAG and iVDGL.
-
Grid Middleware: LCG-1, HEPCAL, and GLUE
LCG-1 requires identification and validation of Grid Middleware by end of 2002.
Overall requirements on Grid Technology stated by the HEPCAL document from SC2 RTAG in June.
Detailed response from EDG given in August/Sept; detailed response from US VDT, CMS/ATLAS S&C managers in Sept/Oct.
Joint summary from LCG GTA area - Fab, Miron, Ruth, Marcel - for SC2 in November.
Bottom line reported by me at the Grid Deployment Board (http://www.hicb.org/glue/LCG_GDB_100202.ppt) is:
-
Summary - from GDB
The issue of grid interoperability, grid middleware interoperability, interfacing, protocols and common solutions has been identified.
There has been significant progress in terms of understanding and starting to address the technical and political specifics.
A base interoperating infrastructure, which is being tested with existing experiment applications, is in place on which to build and deploy a first LCG production grid.
There is a recognition that there is still much work remaining to make this base complete in detail and robust in operation.
Significant problems are being encountered in any end-to-end grid system or testbed deployment and operation.
Fault detection and reporting, diagnosis and troubleshooting will take a lot of manpower and will benefit from attention to interoperability issues.
Recent EDG slide - Recent Problems:
- MDS Instabilities: Top problem; knock-on problems for users, problems with matchmaking, finding replicas. Moved back to manual configuration of II. Script used to monitor status of sites.
- Logging & Bookkeeping: Buckled under very large number of simultaneous requests. Abuse of dg-job-status -all. Can improve with purging database.
- Testing for 1.2.x Job Manager (GASS cache): Rate problem gone when using fork job manager. Bug found with pbs backend scripts; reported to Globus and fixed; need to repeat testing. Haven't confirmed or denied: EDG GK works with new job manager; combination works with WP1 job submission.
- MDS: Dynamically-linked MDS 2.2 works; gross failure modes gone. Statically-linked MDS is not possible because of LDAP. Will need to deploy parallel Globus trees to have this work.
Recent US CMS Testbed 1-day test list of issues:
- condor-g gahp_server crashing on ramen (Fermi), tracked down to a Globus library bug (Globus I/O misusing gram close, or vice-versa).
- For unknown reasons, condor-g working on tito (could be because of older condor-g from older VDT, could be network/sys config, unknown), so we move there.
- Jobs going on hold, unable to restart -- due to jobmanagers at Florida crashing and leaving corrupted state files -- fixed by Jaime after many test/debug cycles.
- Some jobs got stuck on old jobmgrs with corrupted state -- we made them start over from scratch by either hacking their contact string in the queue, or by removing them and submitting the rescue DAG.
- impala's createcmsimjobs no longer recognizes the -g option -- solved by backing out to an older version.
- MOP's stage-out still relied on gdmp (I had thought we fixed that a while back) and gdmp is hosed, so we had to make it do an on-demand g-u-c (wrapped in ftsh) like the stage-in stage did.
- fd's and inodes getting dangerously high on testulix; upped the levels higher.
- Had to upgrade to the latest DAR at worker sites (auto-install-from-master script is alleged to no longer work but we didn't try).
- impala had references to cmsim125.1 in config files, needed to change them to cmsim125.3.
- Some sites still have local globus/condor config mistakes preventing jobs from running.
- Network outages at UW slowed progress but the software recovered properly.
- Power outages at UW slowed progress but the software recovered properly.
- Worker sites changed during the run (ramen was rebuilt, testulix cluster passwd file corruption, someone accidentally condor_rm'ed jobs on ramen), causing jobs to be lost and forcing us to use DAGMan rescue files at the master site to re-submit them to MOP.
- Globus job-manager bug where it stops polling/updating job status and constantly reports the same status until killed and restarted by hand.
- Some globus job-managers report unable to find files in GASS cache -- no diagnosis yet, jobs nuked & resubmitted from the MOP master site.
- DAGMan crashed and was unable to recover one small DAG; bug not diagnosed yet, had to be recovered by hand (!).
- When the DAG was recovered, too many simultaneous publish stages (~25) started at once; they might have completed eventually but to be safe we paused them all and released them five at a time until they finished.
- Zombie globus gatekeeper processes were hanging around on testulix; diagnosed as a globus I/O bug but not a show-stopper yet so ignored.
- Zombie globus job-manager processes were hanging around on testulix; diagnosed as a globus I/O bug but not a show-stopper yet so ignored.
-
European Grid Programme
Update
Fabrizio [email protected]
www.cern.ch/egee-ei
mailto:[email protected]:[email protected] -
New EU FP6 Budget overview
ICT RI-Budget in FP5 (to compare): 161m
Additional budget for Grids in other IST areas
300m for Geant, Grids, other ICT Research Infrastructures in FP6:
Geant: 80m
Grids: 30m
Others: 41m (including admin. costs)
-
Indicative roadmap of calls
1. Budget from Structuring the ERA Programme (200m)
Calls over 2003-2006: 50m, 100m, 50m
2. Budget from IST (100m)
Calls over 2003-2006: ?m, ?m
-
EGEE initiative Goal
Create a general European Grid production quality infrastructure.
Build on:
EU and EU member states' major investment in Grid Technology.
Several good prototype results.
Goal can be achieved for a minimum of 100m / 4 years on top of the national and regional initiatives.
Approach:
Leverage current and planned national programmes.
Work closely with relevant industrial Grid developers and NRNs.
Build on existing middleware and expertise.
[Diagram: EGEE infrastructure layered between applications and the network]
-
Research challenges include, but are not restricted to:
-
Research challenges include, but are not restricted to:
Advancing fundamental research and the technical state of the art of IT and assessing its impacts on other fields of science and engineering, including:
o Extending the capability to process, manage, and communicate information on a global scale beyond what we imagine today. This includes new paradigms for communication, networking and data processing in large-scale, complex systems.
o Understanding how to extend, or scale up, the network infrastructure to include an extremely large number of computing and monitoring systems, embedded devices and appliances.
o Exploring new research directions and technical developments to enable wide deployment of pervasive IT through new classes of ubiquitous applications and creation of new ways for knowledge acquisition and management.
o Exploiting the power of IT and networking infrastructures to enable robust, secure and reliable delivery of critical information and services anytime, anywhere, on any device.
Expanding our capacity to respond through IT to new opportunities and to lower the lag time between concept and implementation. This includes work directly focused on education, workforce, and productivity issues as well as scientific and engineering research.
Providing new computational, simulation, and data-analysis methods and tools to model physical, biological, social, behavioral, and mathematical phenomena. This can include the creation of novel hardware, the development of computational theory and paradigms coupled with research on a target application, or the enabling of distributed, dynamic data-driven applications and data-intensive computing.
-
HEP Data GRID in Asia
Yoshiyuki Watase
Computing Research Center
KEK
HICB/GGF5 Chicago
13 Oct. 2002
-
Outline
Network Infrastructure
GRID Activities:
Korea
China
Taiwan
Japan
Possible collaboration
Conclusion
-
Network Infrastructure
Japan (NII) - NY: 2.4G x 2 (Jan. 2003)
Japan - US: 622M x 2 (TRANSPAC)
Korea - US: 45M
Korea - Japan: 2.4G (Jan. 2003)
China (IHEP) - Japan (KEK): 128 kbps (HEP)
China - US: 10M
Taiwan - Japan: 155M
Taiwan - US: 622M (Dec. 2002)
TEIN
-
Asia-Pacific Advanced Network: APAN (May 02)
-
GRID Activities at Korea
People at
CHEP: Center for HEP at Kyungpook N U
Universities: Seoul N U, Yonsei U, Korea U
Donchil Sun (CHEP/KNU)
Experiments
LHC/CMS, AMS, HERA/ZEUS
CDF, RHIC/PHENIX
KEKB/Belle, K2K
Testbeds for EU DataGRID, iVDGL
Working Grid environment at CHEP/KNU - SNU
Network
to US APII/APAN 45 Mbps
to EU TEIN 2Mbps -> 20 Mbps
to JP APII/APAN 8 Mbps; GENKAI Project 1 Gbps (Jan. 2003)
-
GRID Activities at China
People at
IHEP (Beijing), Universities
Chuansong Yu (IHEP)
Experiments
LHC/CMS, Atlas
BES(Beijing Spectrometer)
Testbed under preparation
32 x 2 PC farms + RAID + Tape (with CASTOR)
Grid environment test within site
Network
HEP dedicated line to KEK 128kbps -> 2
-
GRID Activities at Taiwan
People at Natl Taiwan U, Natl Central U, Academia Sinica
Ping Yeh (NTU), Simon Lin (AS)
Experiments
LHC/CMS(NTU, NCU), Atlas(AS)
CDF(AS)
Testbed Comp. Center at AS
Network to US 622Mbps (Dec. 2002)
to JP APAN 155Mbps
-
GRID Activities at Japan
People at
KEK, ICEPP(U. Tokyo), Titech, AIST(Tsukuba)
Experiments
LHC/Atlas, KEKB/Belle
CDF
Testbeds
for Atlas: ICEPP (U Tokyo) - KEK
for Gfarm: KEK - Titech - AIST
-
R&D in Japan
Gfarm Development by AIST, Titech, KEK
Architecture: PC farm with large local disk per node
Large data file is divided into fragments and stored in the disks by read-in
Data file integrity is managed by the Gfarm metadata DB
Data I/O by parallel file system
Affinity process scheduling for data residence
Service daemon process: gfd is running at each node
Authentication by GSI
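As a toy illustration of the fragment-plus-metadata idea described above (a generic Python sketch, not the Gfarm implementation or its gfd/metadata-server protocol; node names, spool paths and fragment size are invented), a file can be split into fragments placed round-robin on node-local directories while a small metadata record remembers where each fragment lives:

# Toy sketch of Gfarm-style striping: split a file into fragments, place them
# round-robin on per-node directories, and keep the layout in a metadata dict.
import os

NODES = ['node01', 'node02', 'node03']   # stand-ins for node-local disks
FRAGMENT_SIZE = 64 * 1024 * 1024         # 64 MB fragments (arbitrary choice)

def store(path, metadata):
    """Split 'path' into fragments, write one per node directory, record layout."""
    fragments = []
    with open(path, 'rb') as f:
        index = 0
        while True:
            chunk = f.read(FRAGMENT_SIZE)
            if not chunk:
                break
            node = NODES[index % len(NODES)]
            frag_dir = os.path.join('spool', node)
            os.makedirs(frag_dir, exist_ok=True)
            frag_path = os.path.join(frag_dir, '%s.%d' % (os.path.basename(path), index))
            with open(frag_path, 'wb') as out:
                out.write(chunk)
            fragments.append({'node': node, 'path': frag_path, 'size': len(chunk)})
            index += 1
    metadata[path] = fragments   # the 'metadata DB' knows where every fragment lives
    return fragments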
-
Cluster and Network setting for SC2002 Bandwidth Challenge (9/3)
[Network diagram: Gfarm clusters at KEK, Titech, AIST, ICEPP and Tsukuba-U in Japan (Super SINET, Tsukuba WAN, Maffin) and clusters at Indiana Univ. and SDSC in the US, connected over TransPAC (OC-12 x 2), StarLight, Pacific NW Gigapop and Abilene to the SCinet NOC and AIST booth at SC2002, Baltimore; GbE and 10 GbE links, 10 Gbps switching by courtesy of Force10; clusters of roughly 4-15 nodes with 180 GB to 5 TB of disk and up to 1 GB/s I/O]
Gfarm Demonstration at SC2002: Data transfer, data replication over transoceanic network
-
Conclusions
Grid activities are starting in each country
Testing between a few institutes
International test: for LHC/Atlas, CMS in 2003
Possible collaboration for KEKB/Belle
KEK-Korea, KEK-Taiwan
Network is emerging for heavy application users.
Testbed in collaboration with CS people