This is what you'll get: smart Python autocompletion in Vim.
OK, first we need a working copy of the Python library jedi.
We'll use pip (you use pip, right?) and install it system-wide.
If you have a virtualenv active, first
$ deactivate
Then install it
sudo pip install jedi
Then let's install the plugins we will need. This step will vary; I use pathogen, so all I have to do is clone the plugins' repos into my bundle dir and I'm done. If that's your case too:
cd ~/.vim/bundle
git clone https://github.com/davidhalter/jedi-vim.git
git clone https://github.com/Shougo/neosnippet.git
git clone https://github.com/Shougo/neocomplcache.git
Otherwise visit those urls and follow their installation instructions.
OK, this is the really valuable step: a copy-paste. These settings are the result of trial and error, so just paste them into your .vimrc file and be done with it.
" NEOCOMPLCACHE SETTINGS
let g:neocomplcache_enable_at_startup = 1
imap <expr><TAB> neosnippet#expandable() ? "\<Plug>(neosnippet_expand_or_jump)" : pumvisible() ? "\<C-n>" : "\<TAB>"
smap <expr><TAB> neosnippet#expandable() ? "\<Plug>(neosnippet_expand_or_jump)" : "\<TAB>"
let g:neocomplcache_force_overwrite_completefunc = 1
if !exists('g:neocomplcache_omni_functions')
let g:neocomplcache_omni_functions = {}
endif
if !exists('g:neocomplcache_force_omni_patterns')
let g:neocomplcache_force_omni_patterns = {}
endif
let g:neocomplcache_force_omni_patterns['python'] = '[^. \t]\.\w*'
set ofu=syntaxcomplete#Complete
au FileType python set omnifunc=pythoncomplete#Complete
au FileType python let b:did_ftplugin = 1
" Vim-jedi settings
let g:jedi#popup_on_dot = 0
And if you don't have filetype and indent detection in your Vim, add:
filetype plugin indent on
And that is it. If you have Vim open, close it and open it again. Enjoy your awesome autocompletion.
Notes on web development
Sunday, January 6, 2013
Saturday, November 10, 2012
Slicing a list when you only know half the slice
I wanted a function that 'crawled' all the users in a Django queryset, so the whole problem was the old one: how to work on a whole list, but in small chunks. I needed this for performance reasons, because it was too expensive to try to generate all the users' thumbnails at once, and expecting that nothing would go wrong was insane. Apparently Facebook's servers are not as stable as you think.
So, less blah blah and more code:
def fanfunders_subset(self, start=None, end=None):
    return UserProfile.objects.fanfunders()[start:end]
So I would call this function like this in a loop:
fanfunders_subset(0,10)
# and in the next iteration
fanfunders_subset(10,20)
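The whole loop can be sketched end to end. This is a minimal sketch with a plain list standing in for the queryset; `subset` and `in_chunks` are names I made up for illustration, not part of the original code:

```python
def subset(items, start=None, end=None):
    # Stand-in for fanfunders_subset(): just slice whatever collection you have.
    return items[start:end]

def in_chunks(items, size):
    # Walk the collection in fixed-size chunks, like the loop described above.
    start = 0
    while True:
        chunk = subset(items, start, start + size)
        if not chunk:
            break
        yield chunk
        start += size

users = list(range(25))  # pretend these are the queryset results
chunks = list(in_chunks(users, 10))
# chunks[0] is users[0:10], chunks[1] is users[10:20], chunks[2] is users[20:25]
```

The empty-slice check is what stops the loop, so you never need to know the total length up front.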
At this moment I was thankful that Python slices exclude the end index, so slicing [0:10] gets you the first element, the second, and so on up to the tenth (indices 0 through 9). If the end index were inclusive I would have to do something awkward like [0:10], [11:20], [21:30], and that looks ugly.
So what was the trick? Oh yeah, you can slice with None! In this context:
>>> lst[0:None] == lst[0:len(lst)]
True
And then:
>>> lst[None:10] == lst[0:10]
True
So you can give only one of the two arguments to the function and it will still return the correct slice. That's what this trick is really for: a flexible interface. Sometimes you want a chunk from the end of the list, sometimes from the beginning, and sometimes from the middle; lots of cases are covered safely here. It just looks a little weird the first time you see it, and that's why I wrote this post :P So more people know about it and you don't judge me for using weird tricks in Python.
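A quick sanity check of the None-as-bound behavior described above, on a plain list (nothing Django-specific assumed):

```python
lst = [10, 20, 30, 40, 50]

# None as a slice bound behaves exactly like omitting that bound:
assert lst[0:None] == lst[0:] == lst
assert lst[None:3] == lst[:3] == [10, 20, 30]
assert lst[None:None] == lst[:] == lst

# So a function taking start=None, end=None covers every case:
def subset(items, start=None, end=None):
    return items[start:end]

assert subset(lst) == lst                 # whole list
assert subset(lst, 3) == [40, 50]         # from an index to the end
assert subset(lst, None, 2) == [10, 20]   # from the start to an index
```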
Sunday, November 4, 2012
Connection pooling on Heroku with Django
You should use this:
So PostgreSQL is your thing on Heroku with Django, and you think to yourself: how can I shave milliseconds off my response time while doing practically nothing?
Your answer: database connection pooling.
In your settings.py
import os
import urlparse

url = urlparse.urlparse(os.environ['HEROKU_POSTGRESQL_GOLD_URL'])
path = url.path[1:]
path = path.split('?', 2)[0]

DATABASES = {
    'default': {
        'ENGINE': 'dbpool.db.backends.postgresql_psycopg2',
        'OPTIONS': {'max_conns': 1},
        'HOST': url.hostname,
        'NAME': path,
        'PASSWORD': url.password,
        'PORT': url.port,
        'TEST_CHARSET': None,
        'TEST_COLLATION': None,
        'TEST_MIRROR': None,
        'TEST_NAME': None,
        'TIME_ZONE': 'America/Mexico_City',
        'USER': url.username,
    }
}

# If you use South migrations
SOUTH_DATABASE_ADAPTERS = {
    'default': 'south.db.postgresql_psycopg2',
}
And in your requirements.txt
django-db-pool==0.0.7
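For a closer look at what the settings block above is doing, here is the same urlparse dance on a made-up URL in the shape Heroku uses (the real one comes from the HEROKU_POSTGRESQL_GOLD_URL env var; the credentials below are invented):

```python
try:
    from urllib.parse import urlparse, uses_netloc  # Python 3
except ImportError:
    from urlparse import urlparse, uses_netloc      # Python 2

# Teach urlparse that postgres:// URLs carry a netloc.
uses_netloc.append('postgres')

url = urlparse('postgres://myuser:s3cret@ec2-1-2-3-4.compute-1.amazonaws.com:5432/d5database')

host = url.hostname       # 'ec2-1-2-3-4.compute-1.amazonaws.com'
name = url.path[1:]       # 'd5database' (path without the leading slash)
user = url.username       # 'myuser'
password = url.password   # 's3cret'
port = url.port           # 5432
```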
So, numbers: I'll just leave my New Relic graph here.
This is a graph of 7 days, with time on the x axis and response time on the y axis.
That bump in response time, that orangeish growth, is the database taking 50% more time to answer, because Django has to set up the connection to the database on each request.
It is pretty clear: with connection pooling, around 200 ms response time; without it, over 300 ms. That's about a 50% difference in your app's performance for basically changing a setting.
Notice:
Heroku limits the $9 database plan to 20 connections, and because the connections are persistent thanks to the pool, you'll be permanently using more of them (the exact number I don't know). So be careful about running out of connections: if you have a lot of workers writing to the database and you restart the web app, it might run out of connections and your site will go down (experience talking here).
You'll have to migrate to Crane, the $50 plan, to avoid the connection limit. So be careful out there; it happened to me with a cronjob using up the connections, leaving my main app connection-thirsty and really down.
Update:
I was totally wrong. As it turns out, I changed the pooling setting at the same time Heroku migrated to the new databases, and they are the guilty ones for this increase in response time. Tsk tsk tsk, Heroku.
Sunday, September 23, 2012
Install pygraphviz on Mountain Lion
So you are trying to install pygraphviz on OS X and you get this:
$ pip install pygraphviz
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/Users/grillermo/.virtualenvs/bandtastic/build/pygraphviz/setup.py", line 89, in <module>
raise OSError,"Error locating graphviz."
OSError: Error locating graphviz.
Don't worry, you can install graphviz with Homebrew (you are already using brew, aren't you?) like this:
brew install graphviz
But now you have to add an extra environment variable for the pip installer. All you have to do after brew is done is:
export PKG_CONFIG_PATH=/usr/local/Cellar/graphviz/2.28.0/lib/pkgconfig
And voilà, the pip installer will run without problems.
You might be wondering why we exported that directory and not another: because that's where the file libcgraph.pc is located.
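One gotcha: that export hard-codes version 2.28.0, which will break the next time brew bumps graphviz. Here is a small sketch (the function name is mine, not a real tool) that globs the Cellar for whatever version is installed:

```python
import glob
import os

def graphviz_pkgconfig_dirs(cellar_root='/usr/local/Cellar'):
    """Return the lib/pkgconfig dirs that contain libcgraph.pc for any
    installed graphviz version under a Homebrew-style Cellar tree."""
    pattern = os.path.join(cellar_root, 'graphviz', '*', 'lib', 'pkgconfig', 'libcgraph.pc')
    return [os.path.dirname(p) for p in sorted(glob.glob(pattern))]
```

You could then build PKG_CONFIG_PATH from its output instead of typing the version by hand.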
Wednesday, August 15, 2012
Johnny-cache on Heroku
Hi, today I got johnny-cache, the awesome caching library for Django, working on Heroku.
What you will need:
1. My fork of johnny-cache, which adds a new caching backend with support for:
django-pylibmc-sasl==0.2.4
pylibmc==1.2.3
2. A Heroku Memcachier addon.
3. The settings.
STEP 1
For development, install my fork locally:
pip install git+git://github.com/grillermo/johnny-cache.git
Now add these lines at the end of your requirements.txt:
git+git://github.com/grillermo/johnny-cache.git
django-pylibmc-sasl==0.2.4
pylibmc==1.2.3
STEP 2
Install the addon on your app
heroku addons:add memcachier:dev --app yourHerokuApp
STEP 3
Add these to your settings file
os.environ['MEMCACHE_SERVERS'] = os.environ.get('MEMCACHIER_SERVERS', '')
os.environ['MEMCACHE_USERNAME'] = os.environ.get('MEMCACHIER_USERNAME', '')
os.environ['MEMCACHE_PASSWORD'] = os.environ.get('MEMCACHIER_PASSWORD', '')
CACHES = {}
CACHES['default'] = {
    'BACKEND': 'johnny.backends.memcached.PyLibMCCacheSasl',
    'BINARY': True,
    'JOHNNY_CACHE': True,
    'LOCATION': 'localhost:11211',
    'OPTIONS': {
        'ketama': True,
        'tcp_nodelay': True,
    },
    'TIMEOUT': 500,
}
JOHNNY_MIDDLEWARE_KEY_PREFIX = 'a_nice_string_of_your_choosing'

# The first middleware on the list
MIDDLEWARE_CLASSES = (
    'johnny.middleware.LocalStoreClearMiddleware',
    'johnny.middleware.QueryCacheMiddleware',
    ...
)
By all means, if you encounter problems, contact me @grillermo or leave a comment here; I suffered through this installation and my experience could help somebody in need.
On a related issue, I couldn't get pip to uninstall already-installed packages, so I had to fix my buildpack so Heroku respects these commands:
heroku labs:enable user_env_compile --app bandtastic
heroku config:add CLEAN_VIRTUALENV=true
What these will do is make Heroku reinstall all the packages from your requirements.txt every time you push, so make sure you only do this once, after you update your johnny-cache installation with my repo. Then you should run
heroku config:remove CLEAN_VIRTUALENV
to go back to normality. To use my buildpack, run:
heroku config:add BUILDPACK_URL=git://github.com/grillermo/heroku-buildpack-python.git
UPDATE: DON'T DO IT. Do not install johnny-cache on Heroku, at least not with Memcachier. This is my New Relic performance log.
Look at that! WTF, I'm using the C client pylibmc and response time almost tripled!
Monday, June 25, 2012
Heroku Django new database settings
Heroku just announced that this injection of database settings:
import os
import sys
import urlparse

# Register database schemes in URLs.
urlparse.uses_netloc.append('postgres')
urlparse.uses_netloc.append('mysql')

try:
    # Check to make sure DATABASES is set in settings.py file.
    # If not default to {}
    if 'DATABASES' not in locals():
        DATABASES = {}

    if 'DATABASE_URL' in os.environ:
        url = urlparse.urlparse(os.environ['DATABASE_URL'])

        # Ensure default database exists.
        DATABASES['default'] = DATABASES.get('default', {})

        # Update with environment configuration.
        DATABASES['default'].update({
            'NAME': url.path[1:],
            'USER': url.username,
            'PASSWORD': url.password,
            'HOST': url.hostname,
            'PORT': url.port,
        })

        if url.scheme == 'postgres':
            DATABASES['default']['ENGINE'] = 'django.db.backends.postgresql_psycopg2'

        if url.scheme == 'mysql':
            DATABASES['default']['ENGINE'] = 'django.db.backends.mysql'
except Exception:
    print 'Unexpected error:', sys.exc_info()
will cease to happen; Heroku won't be adding the database settings anymore. Luckily, they provided a simple solution. Here are the steps I followed for my Django app.
Note: I recommend you try this first on a copy of your app. You are using a staging copy, right?
Assumptions/Requirements:
- A heroku gem past version 2.18.1 (this is important, upgrade your gem now):
gem install heroku
Otherwise you will get a "No superclass method 'app'" error.
- Django 1.3 or higher.
- A custom buildpack that is updated to account for the user_env_compile addon. Add it to your app like this:
heroku config:add BUILDPACK_URL=git@github.com:heroku/heroku-buildpack-python.git --app your_app_name
Or, if you need M2Crypto support, you will need my custom buildpack, which is just a mix of the standard Python pack with guybowden's pack that enables the SWIG support needed by M2Crypto:
heroku config:add BUILDPACK_URL=git://github.com/grillermo/heroku-buildpack-python.git --app your_app_name
- dj-database-url==0.2.1 in your requirements.txt.
Then enable the user_env_compile Heroku labs feature on your app:
heroku labs:enable user_env_compile --app your_app_name
And then add the ENV variable to disable injection:
heroku config:add DISABLE_INJECTION=1
Now add these lines to your settings.py
import dj_database_url
DATABASES = {'default': dj_database_url.config(default='postgres://localhost')}
Now make a commit and push to heroku
git commit -m 'disable heroku injection of database settings, manually add them'
git push origin master
You can check that it is not injecting the settings anymore with:
heroku run cat your_project/settings.py
And at the end of your settings there should not be anything you did not put there.
Wednesday, June 13, 2012
Recursive S3 uploading from Python
I recently had the need to upload a directory with many files to S3, so I wrote a small function to do this for me. It is not optimized at all, as it will only send one file at a time, and it doesn't take into account the 5 GB limit on S3 uploads, but it is the simplest thing that could get the job done.
This function traverses a folder and uploads the files with the correct keys so they show up inside their respective 'folders' on the S3 management panel. I say 'folders' because there is no such concept on S3, only keys that can optionally contain slashes; so '/home/static/files/lol.py' is the key of the 'lol.py' file, and that's how you request it.
It also checks whether a key already exists, uploading only the missing ones.
So here it is, in all its slow glory.
Also, sometimes Amazon fails to find a key that is supposed to be there, so I had to add a little try/except to handle those cases. I had to upload ~4000 files and only had problems with 10, so I guess this is the 99.9% availability Amazon claims to have for S3 files.
import os

import boto
from boto.s3.key import Key

failed = open('failers', 'w')

def uploadResultToS3(awsid, awskey, bucket, source_folder):
    c = boto.connect_s3(awsid, awskey)
    b = c.get_bucket(bucket)
    k = Key(b)
    for path, dirs, files in os.walk(source_folder):
        for file in files:
            relpath = os.path.relpath(os.path.join(path, file))
            if not b.get_key(relpath):  # only upload missing keys
                print 'sending...', relpath
                k.key = relpath
                k.set_contents_from_filename(relpath)
                try:
                    k.set_acl('public-read')
                except:
                    failed.write(relpath + ', ')
    failed.close()
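To see how the os.walk/relpath combination produces those S3 keys, here is a small sketch on a throwaway directory, with no boto involved. One adjustment to note: I pass the tree root to relpath explicitly, whereas the function above computes paths relative to the current working directory:

```python
import os
import tempfile

# Create a small tree like the one the uploader walks.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'static', 'files'))
open(os.path.join(root, 'static', 'files', 'lol.py'), 'w').close()
open(os.path.join(root, 'readme.txt'), 'w').close()

# The same walk as the uploader, but collecting keys instead of uploading.
keys = []
for path, dirs, files in os.walk(root):
    for name in files:
        # Relative to the tree root, so each key mirrors the folder layout.
        keys.append(os.path.relpath(os.path.join(path, name), root))

# keys now holds 'readme.txt' and 'static/files/lol.py'
```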
You obviously need the boto library, get it first
pip install boto