From 45f495311fbf65fe2505922452d8e7aead1b619b Mon Sep 17 00:00:00 2001
From: Franck Cuny <franck@fcuny.net>
Date: Sun, 1 May 2022 13:42:43 -0700
Subject: static: add my resume as a static page

---
 static/resume.html | 209 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 209 insertions(+)
 create mode 100644 static/resume.html

diff --git a/static/resume.html b/static/resume.html
new file mode 100644
index 0000000..0f1cd83
--- /dev/null
+++ b/static/resume.html
@@ -0,0 +1,209 @@
Franck Cuny

franck@fcuny.net

I'm a seasoned Site Reliability Engineer with experience in large-scale
distributed systems. I'm invested in mentoring junior and senior
engineers to help them increase their impact. I'm always looking to
learn from those around me.

Specializations: distributed systems, +containerization, debugging, software development, reliability.

Experience

Roblox, San Mateo

Site Reliability Engineer | Principal (IC6) | SRE Group | Feb 2022 - present

I'm the Team Lead for the Site Reliability group that was started at +the end of 2021.

I define the roadmap and identify areas where SREs can partner with
different teams to improve the overall reliability of our services.

Twitter, San Francisco

Compute

Software Engineer         | Senior Staff | Compute Info | Aug 2021 - Jan 2022
Site Reliability Engineer | Senior Staff | Compute SREs | Jan 2018 - Aug 2021

I was initially the Tech Lead of a team of 6 SREs supporting the Compute
infrastructure. In August 2021 I moved to a Software Engineer role and
led one of the efforts to adopt Kubernetes for our on-premise
infrastructure. As a Tech Lead I helped define a number of internal
processes for the team, from on-call rotations to postmortem
processes.

Twitter's Compute is one of the largest Mesos clusters in the world
(XXX thousand nodes across multiple data centers). The team defined
KPIs, improved automation to manage the large fleet of bare-metal
machines, and defined APIs for maintenance with partner teams.

In addition to supporting Aurora/Mesos, I also led a number of
efforts related to Kubernetes, both on-premise and in the cloud.

Finally, I've helped Twitter save XX million dollars in hardware by
designing and implementing strategies to significantly improve the
hardware utilization of our bare-metal infrastructure.

Storage

Site Reliability Engineer | Staff | Storage SREs | Aug 2014 - Jan 2018

For 4 years I supported the Messaging and Manhattan teams. I moved
all the pub-sub systems from bare-metal deployments to Aurora/Mesos,
making us the first storage team to adopt the Compute orchestration
platform. This reduced operational load and time to deploy, and
improved overall reliability. I pushed for adopting 10Gb+ networking in
our data centers to help our team scale. I was the SRE Tech Lead for the
Manhattan team, helping with performance, operations, and automation.

Say Media, San Francisco

Software Engineer | Senior SWE | Infrastructure | Aug 2011 - Aug 2014

During my time at Say Media, I worked on two different teams. I
started as a software engineer on the platform team, building the various
APIs; I then transitioned to the operations team to develop tooling that
increased the effectiveness of the engineering organization.

Linkfluence, Paris

Software Engineer | Senior SWE | Infrastructure | July 2007 - July 2011

I was one of the early engineers to join Linkfluence in 2007. I led
the development of the company's crawlers (web, feeds). I was responsible
for defining the company's early architecture and designed its
internal platforms (Service Oriented Architecture). I contributed to
open source projects on behalf of the company and represented it at
numerous open source conferences in Europe.

Technical Skills
