Programmatically Managing Python Workloads Across Multiple Clouds

Chayim I. Kirshen ([email protected] / @chayimk)


DESCRIPTION

I delivered this talk at PyCon Canada 2012. The focus: understanding the role of DevOps and how it differs from the traditional dev + operations deployment model. There's also some lightweight detail on implementing this using Python.

TRANSCRIPT

Page 1: Programmatically Managing Python Workloads  Across Multiple Clouds

+Programmatically Managing Python Workloads Across Multiple Clouds

Chayim I. Kirshen ([email protected] / @chayimk)

Page 2: Programmatically Managing Python Workloads  Across Multiple Clouds

+Traditional Development

Developers write code, OPS supports code

OPS teams eschew risk

Systems that change are unknown quantities

All changes live forever

Page 3: Programmatically Managing Python Workloads  Across Multiple Clouds

+Whither DevOps

Traditional Operations

Supports operational objectives

Service focused

Frequent manual intervention

IT Mindset - Scripter, SysAdmin, Paranoid Futurist

Strives for consistency, accepts relativity

DevOps

Drives operational objectives

Customer focused

Managing Operations through Automation and Development

IT Developer - Coder, SysAdmin, Coach, Paranoid Prepared Futurist

Ensures consistency through code

Page 4: Programmatically Managing Python Workloads  Across Multiple Clouds

+To Collaborate We Must Understand…

How can Operations formalize and track system change? Not just changes in code, but changes in infrastructure

How can Operations embrace change, pro-actively support organizational needs, but reduce risk?

How can Development work in simulated production?

How can we encourage collaboration?

Page 5: Programmatically Managing Python Workloads  Across Multiple Clouds

+Fit the Middle

Page 6: Programmatically Managing Python Workloads  Across Multiple Clouds

+Technology Stack

Django

libCloud

Celery

Paramiko

Puppet

Page 7: Programmatically Managing Python Workloads  Across Multiple Clouds

+Exploring the Django-ized Components

Page 8: Programmatically Managing Python Workloads  Across Multiple Clouds

+Where This Went

Created nodes across EC2 (us-west-1, us-west-2, us-east-1) and GoGrid

Parallelized workload creation/destruction (a minimal sketch follows this list)

Configured all automatically with Puppet
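For illustration only, a minimal sketch of that parallel, multi-cloud creation step. Credentials and names are placeholders, and bare threads are used here just to keep the sketch self-contained (the talk parallelizes this kind of work through Celery):

# Hedged sketch: create one node per target cloud in parallel with libcloud.
# Credentials and names below are placeholders, not the talk's real values.
from threading import Thread
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def create_node(provider, key, secret, name):
    driver = get_driver(provider)(key, secret)
    # First image / smallest size purely for illustration; in practice a
    # known NodeImage id is passed (see the libcloud slide later in the deck).
    image = driver.list_images()[0]
    size = sorted(driver.list_sizes(), key=lambda s: s.ram)[0]
    return driver.create_node(name=name, image=image, size=size)

# One thread per target cloud; additional EC2 regions and GoGrid follow the
# same pattern with their own Provider constants and credentials.
targets = [
    (Provider.EC2, "AWS_ACCESS_KEY", "AWS_SECRET_KEY", "ec2-worker"),
]
threads = [Thread(target=create_node, args=t) for t in targets]
for t in threads:
    t.start()
for t in threads:
    t.join()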

Page 9: Programmatically Managing Python Workloads  Across Multiple Clouds

+Settings Loading is Painful

settings.py gets to be too big

Easier to reduce testing impact by segregating services

Separate the concern

# settings.py
thismodule = sys.modules[__name__]
helpers.settings_loader(thismodule, CONFIG_DIR)

Page 10: Programmatically Managing Python Workloads  Across Multiple Clouds

# helpers.py
import os
import sys

def settings_loader(module_base, CONFIG_DIR):
    if CONFIG_DIR not in sys.path:
        sys.path.append(CONFIG_DIR)

    settings_files = os.listdir(CONFIG_DIR)
    for setting in settings_files:
        if setting[-3:] != ".py":
            continue

        # import the module
        module = __import__(setting[:-3])
        for key in dir(module):
            # hidden variables should never be imported; they're either
            # internals of the imported module, or not worth importing
            if key[:2] == "__":
                continue

            # reflect the setting into our base settings module
            setattr(module_base, key, getattr(module, key))
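For illustration, assuming a CONFIG_DIR laid out like the hypothetical sketch below, every top-level name in those files ends up as an attribute of the Django settings module:

# Hypothetical CONFIG_DIR layout (not from the talk):
#   conf/
#     libCloud_settings.py  -> AWS_ACCESS_KEY, AWS_SECRET_KEY, SSH_USERNAME, SSH_KEY
#     celery_settings.py    -> BROKER_URL, BACKEND_URL
#
# After settings.py runs helpers.settings_loader(thismodule, CONFIG_DIR):
from django.conf import settings
print(settings.BROKER_URL)  # 'mongodb://localhost:27017/celeries'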

Page 11: Programmatically Managing Python Workloads  Across Multiple Clouds

+Configuring Components

libCloud_settings.py

import os

root = os.path.abspath(os.path.dirname(__file__))

AWS_ACCESS_KEY = "<xxx>"

AWS_SECRET_KEY = "<yyy>"

SSH_USERNAME = "ubuntu"

SSH_KEY = os.path.join(root, "yourkey.pem")

celery_settings.py

BROKER_URL = 'mongodb://localhost:27017/celeries'

BACKEND_URL = 'mongodb://localhost:27017/celery_results'

Page 12: Programmatically Managing Python Workloads  Across Multiple Clouds

+Unfogging libCloud

Page 13: Programmatically Managing Python Workloads  Across Multiple Clouds

+

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
from libcloud.compute.base import NodeImage, NodeSize

EC2Driver = get_driver(Provider.EC2)

self.driver = EC2Driver(EC2_ACCESS_ID, EC2_SECRET_KEY)

image = NodeImage(id="ami-87712ac2", name="", driver="")

size = NodeSize(id="m1.small", name="", ram=None, disk=None, bandwidth=None, price=None, driver="")

self.driver.create_node(name=name, image=image, size=size,
                        ex_keyname=sshkeyname,
                        ex_securitygroup=securitygroup)

nodes = self.driver.list_nodes()

Page 14: Programmatically Managing Python Workloads  Across Multiple Clouds

+Node Pattern

Connect to the node

Touch /etc/publicip

Set the hostname

Put the Puppetmaster address in /etc/hosts

Register with the puppet master: puppet agent --waitforcert 60 --verbose --server=puppet.master

Sign the cert on the puppet master

Puppetize the client (a Paramiko sketch of this pattern follows below)
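A minimal Paramiko sketch of that node pattern. The host, key file, and puppet master address are placeholders, error handling is omitted, and the cert still has to be signed on the puppet master side:

# Hedged sketch of the node bootstrap pattern with Paramiko.
import paramiko

def bootstrap(node_ip, hostname, puppetmaster_ip):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(node_ip, username="ubuntu", key_filename="yourkey.pem")

    commands = [
        # Touch /etc/publicip and record the address
        "echo %s | sudo tee /etc/publicip" % node_ip,
        # Set the hostname
        "sudo hostname %s" % hostname,
        # Point the node at the puppet master
        "echo '%s puppet.master' | sudo tee -a /etc/hosts" % puppetmaster_ip,
        # Register with the puppet master and wait for the cert to be signed
        "sudo puppet agent --waitforcert 60 --verbose --server=puppet.master",
    ]
    for cmd in commands:
        stdin, stdout, stderr = client.exec_command(cmd)
        stdout.channel.recv_exit_status()  # block until the command finishes
    client.close()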

Page 15: Programmatically Managing Python Workloads  Across Multiple Clouds

+Working with Vegetables

Django integrated: python manage.py migrate celery || python manage.py syncdb

python manage.py celery worker --loglevel=info

Standalone: celery -A tasks worker --loglevel=info -B -E

All celeryable tasks go in tasks.py for your Django project

Decorate functions with @task

Call your existing function with <name>.delay(arg, arg, arg) (a minimal sketch follows below)
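A minimal tasks.py sketch under those conventions. The app name, the create_node task, and its body are illustrative stand-ins (the broker URL comes from celery_settings.py above), and the talk's django-celery era used the bare @task decorator rather than an explicit app:

# tasks.py - hedged sketch, not the talk's code
from celery import Celery

app = Celery("tasks", broker="mongodb://localhost:27017/celeries")

@app.task
def create_node(name, provider):
    # stand-in for the real libcloud/Paramiko work
    return "%s created on %s" % (name, provider)

# Caller side: .delay() queues the call and returns an AsyncResult immediately.
# result = create_node.delay("worker-1", "ec2")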

Page 16: Programmatically Managing Python Workloads  Across Multiple Clouds

+

A celerized result object is always returned:

result = <foo>.delay(arg, arg, arg)

result.ready() == True|False

For blocking: add all celery results to a list and iterate, looking for completion (see the sketch below)
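A hedged sketch of that blocking pattern, reusing the hypothetical create_node task from the earlier sketch:

import time

# Queue several tasks, collect the AsyncResults, then poll until all are done.
results = [create_node.delay("worker-%d" % i, "ec2") for i in range(5)]
while not all(r.ready() for r in results):
    time.sleep(1)
print([r.get() for r in results])  # safe now: every result is ready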

Page 17: Programmatically Managing Python Workloads  Across Multiple Clouds

+Parallelization Problems

SSH: Fabric == AWESOMESAUCE, but Fabric == singleton; so, Paramiko!

Logging: prefix log lines with the proposed host name (a sketch follows below)
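One way to do that per-host prefixing, sketched with the standard logging module (the logger name and the hostname field are assumptions, not the talk's code):

import logging

logging.basicConfig(format="[%(hostname)s] %(levelname)s %(message)s",
                    level=logging.INFO)

def host_logger(hostname):
    # LoggerAdapter injects the proposed host name into every record
    return logging.LoggerAdapter(logging.getLogger("provision"),
                                 {"hostname": hostname})

log = host_logger("worker-1.us-west-1")
log.info("touching /etc/publicip")  # -> [worker-1.us-west-1] INFO touching /etc/publicip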

Page 18: Programmatically Managing Python Workloads  Across Multiple Clouds

+To Collaborate, We Understood!

How did Operations formalize and track system change? Revision control, code, comments

How did Operations embrace change, pro-actively support organizational needs, but reduce risk? Deploying infrastructure through automation, without impacting production

How did Development work in simulated production? By simulating the setup with a different cloud provider

How did we encourage collaboration? Developers and OPS write Puppet code; Developers and OPS sit together; Developers and OPS learn from each other