Gears of Perforce: AAA Game Development Challenges
TRANSCRIPT
Gears of Perforce
Simon Clay
The Coalition / Gears of War 4
2016.4.7
AAA Game Development Challenges
Stats: globally, over 500 contributors to the project. One branch is 5 TB (including art); a new-starter sync is 800 GB. Typical daily churn is 60 GB ~ 120 GB, with peaks of 200 GB.
Today's topics:
• Submitting from the other side of the world.
• Syncing challenges and solutions.
• Mirroring other repositories into ours.
Long Distance Relationships: Submitting GBs around the globe
Submit using UDP
High-latency connection from the UK to Vancouver: a 1 GB submit took 2 hours; now it's 7 minutes.
• Use a UDP transfer service (FileCatalyst – there are others).
• Create an auto-syncing network folder between the studios.
• Create a Custom Tool in P4V to 'side-submit' – the fastsubmit tool.
• Uses P4 as the communication channel – fewer moving parts.
• Submits locally from Vancouver once the files arrive.
• Fakes the UK user's workspace to appear to have synced.
Complex process, multiple users – we made a GUI to visualize progress. Reliant on the UDP service being up.
Problem: the UK proxy isn't primed.
• Needs a side-sync to use the local file.
• The side-sync looks for the network file first; the proxy is eventually primed.
• Considered locally priming the proxy – seemed complex and risky.
Syncing: Solving a significant challenge
AAA Games are big!
In one location, we have >60 GB/day × 250 PCs (~15 TB/day). There are lots of well-established routes to scale Perforce:
• Multiple proxies/clusters with a DNS load balancer, Edge servers, etc.
• Network topology that scales: a switch per proxy per ~20 PCs.
• Each proxy needs SSDs, pumping, maintenance and purchasing.
Even if you succeed, 100 GB at 1 Gbit is 15~30 minutes:
• 15 minutes × 250 PCs is over 60 hours – more than a 'person-week' lost every day.
• 10 Gbit to the desktop is the next step – and even more proxy infrastructure.
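As a sanity check, the sync-time arithmetic above can be reproduced. The 50% link-efficiency figure and the 40-hour person-week are assumptions for illustration; the 100 GB payload and 250-PC fleet are from the slide:

```python
# Back-of-envelope check of the sync-time claims.
def transfer_minutes(gigabytes: float, gbit_per_s: float, efficiency: float) -> float:
    """Minutes to move `gigabytes` over a `gbit_per_s` link at the given efficiency."""
    bits = gigabytes * 8  # GB -> Gbit
    return bits / (gbit_per_s * efficiency) / 60

# 100 GB at 1 Gbit: ~13 min at wire speed, ~27 min at an assumed 50% efficiency.
best = transfer_minutes(100, 1.0, 1.0)
worst = transfer_minutes(100, 1.0, 0.5)
print(f"100 GB sync: {best:.0f}-{worst:.0f} minutes")

# 15 minutes per PC across 250 PCs -> machine time lost per full sync.
hours_lost = 15 * 250 / 60
print(f"Fleet-wide: {hours_lost:.0f} hours (~{hours_lost / 40:.1f} 40-hour person-weeks)")
```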
We needed something different than just scaling up.
Magic Hat Trick
You Will Need:
• 1 Rabbit,
• 1 Hat,
• 1 Wand (Magic).
Instructions:
• Hide Rabbit in Hat (do not show this).
• Demonstrate Hat is empty.
• Wave Magic Wand over Hat.
• Remove Rabbit from Hat.
Prefetch: Always Be Collecting
Turn syncing on its head: proactive rather than reactive.
• Push files into a separate directory, e.g. d:\prefetch.
• Get files onto PCs the moment they are submitted.
• Needs some extra local disk space.
• Use the depot path and the #revision as the prefetch filename.
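A minimal sketch of that naming convention – depot path plus #revision under the prefetch root. The helper name and the d:\prefetch default are illustrative, not the talk's actual tool:

```python
import os

def prefetch_path(depot_file: str, revision: int, root: str = r"d:\prefetch") -> str:
    """Map //depot/path/file.ext and a revision to its prefetch location."""
    rel = depot_file.lstrip("/")       # drop the // depot prefix
    rel = rel.replace("/", os.sep)     # depot paths always use forward slashes
    return os.path.join(root, f"{rel}#{revision}")

print(prefetch_path("//depot/main/art/rock.uasset", 42))
```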
Create a presync that moves the prefetch file into the workspace:
• A flush sync (p4 sync -k) keeps P4 up to date.
This has the potential to deliver a 100 GB sync in 10 seconds:
• Generally always less than 2 minutes – nice, reliable sync times.
Users are no longer 'afraid' of syncing – at any time of the day. Attach the presync as a P4V Custom Tool:
• Or set up a localhost p4broker with a sync rule that presyncs.
presync:
• preview: p4 sync -n
• check the action, check the size, check for the prefetched file
• remove: sync file#0
• make directories
• rename the prefetch file into the workspace
• flush: p4 sync -k
• mark the file read-only (!+w)
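The preview-and-plan step above can be sketched in Python. This is an illustrative reconstruction, not the talk's actual tool; it assumes `p4 sync -n` preview lines of the form `//depot/file#rev - updating <local path>` (the "added as" / "deleted as" forms would need their own handling):

```python
import posixpath

def plan_presync(preview_lines, prefetch_root="d:/prefetch"):
    """Return (prefetch_file, workspace_file) pairs we could rename into place."""
    plan = []
    for line in preview_lines:
        left, sep, right = line.partition(" - ")
        if not sep or "#" not in left:
            continue                      # not a sync preview file line
        depot, _, rev = left.rpartition("#")
        action, _, local = right.partition(" ")
        if action == "updating":          # other actions not handled in this sketch
            src = posixpath.join(prefetch_root, depot.lstrip("/") + "#" + rev)
            plan.append((src, local))
    return plan

preview = ["//depot/main/art/rock.uasset#12 - updating d:/ws/main/art/rock.uasset"]
print(plan_presync(preview))
```

The real tool would then, per the steps listed: sync file#0, make directories, rename the prefetch file in, flush with p4 sync -k, and clear the writable bit.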
Delivering Prefetch
Binaries only (no CR-LF problems); set a minimum size, e.g. >1 MB.
Serverless option (small team, big data):
• p4 print -o d:\prefetch\depotpath\file.ext#rev //depotpath/file.ext#rev
Prefetch server – similar to a proxy pump, a continual sync:
• Place files on a network share.
• Build depotname#revision filenames.
• Delete old #revisions as new ones come in (focus on #head).
• On each PC: robocopy or rsync the files, and limit the speed.
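The "delete old #revisions" housekeeping could look like this sketch, given the depotname#revision filename convention described above (the function name and flat-list input are assumptions):

```python
def stale_revisions(prefetch_names):
    """Return prefetch filenames that are not the newest #revision of their file."""
    newest = {}
    for name in prefetch_names:
        path, _, rev = name.rpartition("#")
        if int(rev) > newest.get(path, -1):
            newest[path] = int(rev)
    return [n for n in prefetch_names
            if int(n.rpartition("#")[2]) != newest[n.rpartition("#")[0]]]

names = ["depot/a.ext#1", "depot/a.ext#3", "depot/b.ext#2"]
print(stale_revisions(names))  # -> ['depot/a.ext#1']
```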
Using a fileshare only shifts the network load:
• Great to get sync load off of the Perforce server and solve sync times.
• Many commercial solutions for very-high-bandwidth filers – lots of choice.
• Still lots of $$$ to solve the file-distribution-at-scale problem.
• Solving 10 Gbit to the desktop is still challenging.
Or you could…
Delivering Prefetch – Using Multicasting
Multicast the prefetched files using a UDP file transfer tool – we use UFTP 4.9.1; there are others. This can be difficult, but it's worth it:
• Very efficient delivery to n PCs.
• We transmit at 35 to 75 Mbit.
• No server load (1× sync cost).
• 1× network bandwidth (1 + n×0).
It's possible to pre-send files and avoid sync delays altogether.
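The "1× network bandwidth (1 + n×0)" point is just this arithmetic: unicast sends every byte once per PC, multicast sends each byte once in total. The 100 GB payload and 250-PC fleet are the figures used earlier in the talk:

```python
def bytes_sent_gb(payload_gb: float, n_pcs: int, multicast: bool) -> float:
    """Total GB the sender puts on the wire for a fleet of n_pcs."""
    return payload_gb * (1 if multicast else n_pcs)

payload, fleet = 100, 250
print(f"unicast:   {bytes_sent_gb(payload, fleet, False):,.0f} GB on the wire")
print(f"multicast: {bytes_sent_gb(payload, fleet, True):,.0f} GB on the wire")
```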
Mirroring: Turning Helix into Git the hard way…
Mirroring
'Can you get the latest Unreal Engine from Epic?' Never ask an engineer to do a repetitive task… unless you want them to build a giant robot.
• Started in 2011 (before Perforce provided push/pull).
• Only needs regular user access on the remote server.
• 100% replication of complex integrations across many branches.
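The talk doesn't show the mirroring tool itself. One plausible building block, given only regular user access on the remote server, is reading a changelist via `p4 describe -s`, whose "Affected files" section lists lines like `... //depot/main/foo.cpp#12 edit`. This parser is an illustrative sketch, not the actual robot:

```python
def affected_files(describe_output: str):
    """Extract (depot_path, revision, action) triples from `p4 describe -s` output."""
    files = []
    for line in describe_output.splitlines():
        if line.startswith("... //"):
            path_rev, _, action = line[4:].rpartition(" ")
            depot, _, rev = path_rev.rpartition("#")
            files.append((depot, int(rev), action))
    return files

sample = """Change 1001 by sclay@ws on 2016/04/07

\tMerge engine drop

Affected files ...

... //depot/engine/core.cpp#12 edit
... //depot/engine/new.h#1 add
"""
print(affected_files(sample))
```

A mirror pass would then `p4 print` each revision from the remote server and replay the adds/edits/deletes as a local submit.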
• Rolled out to Lionhead in 2012, Rare in 2013.
• Opened The Coalition's P4 to Xbox UE4 collaborators.
• Saw a need for internal collaboration around UE4.
• Created //studios – a depot that exists on 3 servers and is aware of internal branching and of branching from Epic's mirror.
• Allows Microsoft Studios UE4 partners to collaborate without emailing code around and without centralizing their repositories.
• Changes suitable for UE4 licensees get pushed back to Epic.
//studios is n*(n-1) mini-mirrors:
• Integration replication within //studios + Epic.
• Integrations from projects turn up as add/edit.
• Tools to seed cherry-picks from projects with Epic base files.
[Diagram: the actual Epic P4 server feeds a mirror of Epic P4 on each of the Coalition, Lionhead and Rare servers; //studios exists on all three, alongside the Gears of War 4, Fable Legends and Sea of Thieves project depots.]
• Allows work to be done in-project, anywhere, and then brought back to a central place to be refined and integrated onward.
• Distributed – robust to VPN or server outages.
• Submit conflicts avoided with simple top-level directories.
• Benefits of a monolithic repository – changes can come from anywhere and go anywhere.
• Key users have accounts everywhere and know how to pull changes into //studios, then integrate across projects with seed files.
[email protected] / @_s_clay
• Submitting
• Syncing
• Multicasting
• Mirroring