PostgreSQL is an amazing database, no surprises here. But every now and then I discover a neat little feature that I was missing; this week it was the string_agg aggregate function.

Suppose that you have two tables, course and student, and want to list every student enrolled in a given discipline. I usually did this with a quick Python script, grouping by course and then making a simple ', '.join(). With string_agg I can now do it directly in the query.
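To make the before/after concrete, here is a small sketch; the rows and the enrollment table name are illustrative, not from the original post:

```python
from collections import defaultdict

# Hypothetical (course, student) rows, standing in for a JOIN between
# the course and student tables.
rows = [
    ("Databases", "Alice"),
    ("Databases", "Bob"),
    ("Compilers", "Carol"),
]

# The old Python-side approach: group by course, then join the names.
grouped = defaultdict(list)
for course, student in rows:
    grouped[course].append(student)
result = {course: ", ".join(names) for course, names in grouped.items()}
# result["Databases"] == "Alice, Bob"

# The same result, computed entirely inside PostgreSQL:
#     SELECT course, string_agg(student, ', ')
#     FROM enrollment
#     GROUP BY course;
```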
In my personal project, I have a few functional tests that cover the most used features from an end-user perspective. Until now I've used Selenium and PhantomJS, but starting with version 3.8.1, Selenium's support for PhantomJS is deprecated and the recommendation is to run Firefox or Chrome in headless mode. I'm not a big fan of keeping important packages pinned for compatibility reasons, so as soon as I got a bit of free time I decided to replace PhantomJS with headless Chrome and update Selenium to the latest version.
So, my last blog post was in July, and I'll use this one to talk a bit about the stuff I've been messing with since then.
Open source stuff
If you look at my GitHub profile, the activity in the last couple of months or so was rather low. Those green dots are mainly commits to my little side project, pets: I worked on its internationalization, and I hope that when it's 100% done the project will be useful to more people. It was a very straightforward job; my only complaint was the inconsistency between the compilemessages and makemessages commands. But this was fixed in Django 1.9, so when I upgrade the project to the next LTS release of Django everything will be fine here.
A few months ago I recorded a screencast giving an overview of Django's cache system to some friends; time passed and I forgot to share it here as well. Since there isn't much material about it, I tried to show some examples of the different ways the cache can be used.
I know we are living in the era of zero-downtime deploys but, given my website's user base and the fact that it's really simple and small, I thought an old-school "maintenance page" would be enough for me at the moment. My deployment process is executed by a single command, so it's quite stable and fast. For the sake of completeness, I'll talk a bit about my deployment process.
I have a project where there's a model with a profile image field. This field is required. I did not want to keep an image file in the repository just for tests, so I decided to research other solutions.

Image file at the model's creation

I created a helper method that returns an image file, which I then use at object creation time through the model's manager:
So last month I decided to try something different: committing something to GitHub every day for 30 days. A few days ago I reached my 30th day; in those 30 days I committed around 45 times. I made a series of small improvements to an open source website I maintain, solved a few exercises from Google's Python Class, and closed 3 simple bugs in Kuma, the project behind the Mozilla Developer Network.
I’m going on an adventure
Three months ago I quit my job as a Java developer and moved away from desktop development for the first time. Currently I'm working as a web developer, doing both backend and frontend, at a university in my hometown. There are a lot of new things to learn, and a completely new environment.

In this project we are a team of two developers. The application is almost 15 years old and the source code is a disaster. All the developers who used to work on this project left the company at almost the same time. When I decided to accept the job, our initial idea was to modernize the frontend stack, which, besides relying on old technologies, was a complete mess; and to restructure the backend code to better separate responsibilities.
Opening a connection to PostgreSQL is not exactly the definition of slow, but it is something that can easily be optimized. If your database lives on a different server, or even locally, opening a new connection can take a few milliseconds. I use Opbeat to monitor my project's performance; looking at the breakdown of time spent in each layer of the application, I can see that even 27.5 ms represents, on my home page, 23% of the request time.
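In a Django project, the usual remedy is persistent connections via the CONN_MAX_AGE database setting; the database name and timeout below are illustrative, not from the original post:

```python
# settings.py sketch: CONN_MAX_AGE > 0 keeps database connections open
# across requests instead of reconnecting on every request (the default,
# 0, closes the connection at the end of each request).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",        # illustrative database name
        "CONN_MAX_AGE": 600,   # reuse each connection for up to 10 minutes
    }
}
```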