Building a Twitter GenServer with ExTwitter part 2

Posted on Tue 13 September 2016 in elixir • Tagged with elixir, twitter, extwitter, genserver

Premise boilerplate ;)

This article covers my learning experience in building a GenServer process that talks to Twitter. I am learning the Elixir language in my evenings, so bear with me and please comment if you find inaccuracies in this article.

I will not comment on every line of code in my experiment; however, feel free to drop a comment if there's something that triggers your interest.

Objective

This is the second part of building a little GenServer that talks to Twitter by taking advantage of the ExTwitter Elixir module.

It will focus on the actual implementation of the GenServer callbacks that handle the communication with Twitter, whether through a stream or a normal Twitter search.

You can find the first part of the article here

I could take advantage of special effects... however, here is the GenServer implementation!

def handle_call(%{search: topic}, _from, state) do
  tweets = ExTwitter.search(topic)
  {:reply, tweets, Map.put(state, :tweets, tweets)}
end

def handle_call(:entries, _from, state) do
  {:reply, Enum.reverse(Map.get(state, :tweets, [])), state}
end

def handle_call(:stop_stream, _from, %{stream: stream_pid, tweets: tweets}) do
  ExTwitter.stream_control(stream_pid, :stop)
  Process.exit(stream_pid, :normal)
  {:reply, :ok, %{tweets: tweets}}
end

def handle_call(:stop_stream, _from, state) do
  {:reply, :stream_not_started, state}
end

def handle_call(%{start_stream: topic, timer: milliseconds}, _from, state) do
  {:reply, :ok, Map.put(state, :timer, schedule_work(topic, milliseconds))}
end

# Stream already started? just carry on with the state
def handle_info(%{fetch_tweets: _}, %{stream: _} = state)  do
  {:noreply, state}
end

def handle_info(%{fetch_tweets: topic}, state) do
  parent = self()
  pid = spawn_link fn ->
    configure_extwitter()
    for tweet <- ExTwitter.stream_filter([track: topic], :infinity) do
      send parent, {:tweet, tweet}
    end
  end
  {:noreply, Map.put(state, :stream, pid)}
end

def handle_info({:tweet, tweet}, state) do
  tweets = [tweet|Map.get(state, :tweets, [])]
  {:noreply, Map.put(state, :tweets, tweets)}
end

def handle_info(:purge_tweets, state) do
  schedule_cleanup()
  tweets = Map.get(state, :tweets, [])
  |> Enum.take(@max_keep_tweets)
  {:noreply, Map.put(state, :tweets, tweets), :hibernate}
end

Woah, that was a big hit! Let's break it down again...

def handle_call(%{search: topic}, _from, state) do
  tweets = ExTwitter.search(topic)
  {:reply, tweets, Map.put(state, :tweets, tweets)}
end

If you remember our GenServer interface described in part 1, we had a GenServer.call(via_tuple(namespace), %{search: topic}) in the search function. When calling Tweetyodel.Worker.search("my_tweets", "#myelixirstatus"), a synchronous message is sent to our GenServer and handled by this callback.

It's doing nothing special: as you can see, we pattern match on the map that is used as the payload in the GenServer.call function. We execute a blocking search and then:

{:reply, tweets, Map.put(state, :tweets, tweets)}

This means that we :reply immediately with the tweets we searched for, and we "save" them in the GenServer state map that we initialized in init.

That's what Tweetyodel.Worker.search("ma' namespace", "#myelixirstatus") does.

Simple as that.

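To make the call flow concrete, here is a rough usage sketch from the caller's side (assuming a worker registered under the namespace "my_namespace" is already running; if I recall the ExTwitter API correctly, ExTwitter.search returns a list of %ExTwitter.Model.Tweet{} structs):

# Assumes a worker named "my_namespace" was already started (see part 1)
tweets = Tweetyodel.Worker.search("my_namespace", "#myelixirstatus")

# Each element is an %ExTwitter.Model.Tweet{} struct, so we can grab just the text
Enum.map(tweets, fn tweet -> tweet.text end)
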
Let's have a look at another snippet:

def handle_call(:entries, _from, state) do
  {:reply, Enum.reverse(Map.get(state, :tweets, [])), state}
end

Again we have a synchronous, blocking GenServer call that immediately sends a :reply. In this case it returns the tweets that are stored in the "belly" of the GenServer: the GenServer's state.

def handle_call(%{start_stream: topic, timer: milliseconds}, _from, state) do
  {:reply, :ok, Map.put(state, :timer, schedule_work(topic, milliseconds))}
end

This bit starts to get (maybe! :P) more interesting, since we start a timer that decides when the Twitter stream will start.

The schedule_work function calls Process.send_after(self(), %{fetch_tweets: topic}, milliseconds), which sends the %{fetch_tweets: topic} map back to the GenServer after the given number of milliseconds, so that another callback can handle it.

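schedule_work itself is not shown in this post; based on the description above, a minimal sketch could look like the following (the exact implementation in the repository may differ):

# Minimal sketch: ask ourselves to fetch tweets for `topic` after `milliseconds`.
# Process.send_after/3 returns a timer reference, which is what ends up
# stored under the :timer key in the GenServer state.
defp schedule_work(topic, milliseconds) do
  Process.send_after(self(), %{fetch_tweets: topic}, milliseconds)
end
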
Why a timer? I just wanted to play with timers ;) It's a pet project after all no?

The "core" of our GenServer

def handle_info(%{fetch_tweets: topic}, state) do
  parent = self()
  pid = spawn_link fn ->
    configure_extwitter()
    for tweet <- ExTwitter.stream_filter([track: topic], :infinity) do
      send parent, {:tweet, tweet}
    end
  end
  {:noreply, Map.put(state, :stream, pid)}
end

Since I wanted a separate process to handle the stream, so that it does not block the GenServer process, in this snippet I spawn_link another process linked to the parent (the GenServer).

This will "walk through" the possibly infinite stream of tweets and when the stream has data, it will send back to the GenServer a message for every new tweet.
Take a look to the send parent, {:tweet, tweet} statement.

We also save the pid of the streaming process so we can interact with it later on.

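A small aside: configure_extwitter() is a helper that is not shown in the post. Since the stream runs in its own spawned process, the Twitter OAuth credentials have to be (re)applied there; a plausible sketch, assuming the keys live in the application environment under :extwitter, could be:

# Hypothetical helper: apply the Twitter OAuth credentials in the current
# process (the spawned stream process), using ExTwitter's :process scope.
defp configure_extwitter do
  ExTwitter.configure(:process, Application.get_env(:extwitter, :oauth))
end
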
And voila'

def handle_info({:tweet, tweet}, state) do
  tweets = [tweet|Map.get(state, :tweets, [])]
  {:noreply, Map.put(state, :tweets, tweets)}
end

handle_info({:tweet, tweet}, state) matches the messages sent by the spawn_link(ed) process, and we prepend the tweet to the list of tweets in the state map.

While we are at it, we also update the state of the GenServer so that we can query for new entries from the Twitter stream.

And if we get bored of this stream of tweets?!?

Tweetyodel.Worker.stop_stream(namespace) to the rescue!

def handle_call(:stop_stream, _from, %{stream: stream_pid, tweets: tweets}) do
  ExTwitter.stream_control(stream_pid, :stop)
  Process.exit(stream_pid, :normal)
  {:reply, :ok, %{tweets: tweets}}
end

This will match the stream_pid and the tweets from the GenServer state (if present) and, of course, the :stop_stream "invocation" atom.

This particular handle_call will stop the stream and kill the child process that we spawned with spawn_link, so that we can start from scratch. It preserves the collected tweets, though, so that we can still fetch entries.

Infinite tweets!! ROAR

What will happen if we keep accumulating tweets? We will become a fat tweety for sure, and maybe explode.

def handle_info(:purge_tweets, state) do
  schedule_cleanup()
  tweets = Map.get(state, :tweets, [])
  |> Enum.take(@max_keep_tweets)
  {:noreply, Map.put(state, :tweets, tweets), :hibernate}
end

That's why in init I call schedule_cleanup which, again, uses Process.send_after to schedule a :purge_tweets message. When :purge_tweets is matched, we re-schedule another :purge_tweets and we Enum.take(@max_keep_tweets); for example, we might want to keep only the latest 100 tweets.

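schedule_cleanup is also not shown in the post; following the same Process.send_after pattern, a sketch could be (the interval and the @max_keep_tweets value are made-up examples, the repository may use different ones):

# Assumed module attributes, for illustration only
@cleanup_interval 60_000   # purge once a minute (made-up value)
@max_keep_tweets 100       # keep only the latest 100 tweets (made-up value)

# Minimal sketch: ask ourselves to purge the accumulated tweets later on
defp schedule_cleanup do
  Process.send_after(self(), :purge_tweets, @cleanup_interval)
end
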
I am also returning the :hibernate atom in the reply tuple, which forces a full-sweep garbage collection and hibernates the process until the GenServer receives new activity.

Ta-da! That's it! There's a little bit more "boilerplate" in the repository.

Thank you for following me this far! ;)


Building a Twitter GenServer with ExTwitter part 1

Posted on Thu 08 September 2016 in elixir • Tagged with elixir, twitter, extwitter, genserver

Premise

This article covers my learning experience in building a GenServer process that talks to Twitter. I am learning the Elixir language in my evenings, so bear with me and please comment if you find inaccuracies in this article.

I will not comment on every line of code in my experiment; however, feel free to drop a comment if there's something that triggers your interest.

Objective

I wanted to create a self-contained process (a GenServer in Erlang/Elixir terms) that would speak to Twitter. The interface of the process should be simple enough to integrate with an application (web or otherwise).
My personal goal is to integrate it with Phoenix channels; however, that is outside the scope of this article ;)

And... let me present the GenServer interface!

defmodule Tweetyodel.Worker do
  use GenServer

  def start_link(name) do
    GenServer.start_link(__MODULE__, [], name: via_tuple(name))
  end

  def init(_)  do
    schedule_cleanup()
    {:ok, %{}}
  end

  # API

  def start_stream(namespace, topic, timer_milliseconds \\ @start_stream_after) do
    GenServer.call(via_tuple(namespace), %{start_stream: topic, timer: timer_milliseconds})
  end

  def entries(namespace)  do
    GenServer.call(via_tuple(namespace), :entries)
  end

  def stop_stream(namespace) do
    GenServer.call(via_tuple(namespace), :stop_stream)
  end

  def search(namespace, topic) do
    GenServer.call(via_tuple(namespace), %{search: topic})
  end

Let's break it down...

Basic GenServer functions

defmodule Tweetyodel.Worker do
  use GenServer

Every self-respecting GenServer starts with the use GenServer macro directive.

  def start_link(name) do
    GenServer.start_link(__MODULE__, [], name: via_tuple(name))
  end

To have a fully functional GenServer we have to implement some functions of the GenServer "interface". In this case start_link is implemented, which calls GenServer.start_link to start the process, passing it __MODULE__, a parameter that expands to the current module name (i.e. Tweetyodel.Worker).

I left the second parameter (the init argument) empty, because I do not need it for the moment.

The third argument, name: via_tuple(name), is a way to register processes by name in a registry. I will not go into details here; however, a registry helps you look up your process by name (you could see it as a sort of namespace). Also, by locating your process by name, you can obtain its pid (process id) if needed.

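via_tuple is not shown in this snippet. Since the project registers workers through gproc (more on that below), a sketch of what it might look like is the following; the exact key structure in the repository may differ:

# Hypothetical sketch of via_tuple/1: build a {:via, module, key} tuple so
# GenServer can register and look up the process through gproc.
# {:n, :l, key} means a "name" registration, "local" to this node.
defp via_tuple(name) do
  {:via, :gproc, {:n, :l, {:tweetyodel, name}}}
end
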
  def init(_)  do
    schedule_cleanup()
    {:ok, %{}}
  end

To implement a "fully functional" GenServer in Elixir you also have to specify the init function. It's the place where you put all the operations that initialize the GenServer, for example opening a connection to an external service.

init calls schedule_cleanup(), a function that will purge the tweets periodically. It has to be called in the init phase to schedule a timer that reduces the number of tweets in the GenServer state. But what is {:ok, %{}}? It's the tuple returned from the GenServer's init function to initialize the state of the process.

In this case it's just an empty map which will be populated when using the Tweetyodel.Worker.

The Tweetyodel interface

After having our bare-bones GenServer process implementation, we might want to give the user of our process something to play with ;) Here is the API of my open-source pet project:

  # API

  # Start a twitter stream after timer_milliseconds
  def start_stream(namespace, topic, timer_milliseconds \\ @start_stream_after) do
    GenServer.call(via_tuple(namespace), %{start_stream: topic, timer: timer_milliseconds})
  end

  # Returns the entries stored inside the `GenServer` process (i.e. the state of the process)
  def entries(namespace)  do
    GenServer.call(via_tuple(namespace), :entries)
  end

  # Stops the Twitter streams and does something more ;) We'll see later on
  def stop_stream(namespace) do
    GenServer.call(via_tuple(namespace), :stop_stream)
  end

  # Executes a direct Twitter search
  def search(namespace, topic) do
    GenServer.call(via_tuple(namespace), %{search: topic})
  end

Without going (yet) into the GenServer callback implementations, let's see it in action...

How it works

{:ok, pid} = Tweetyodel.Worker.Supervisor.start_tweet("tweetyodel")

# Apple always has tweets
Tweetyodel.Worker.start_stream("tweetyodel", "apple")

# Fetch only the first 5 tweets and their text
# NOTE that pulling data from Twitter starts after 10 seconds by default (although you can change it)
Enum.map(Tweetyodel.Worker.entries("tweetyodel"), fn tweet -> tweet.text end) |> Enum.take(5)

Quite simple, no? I did not describe the Tweetyodel.Worker.Supervisor.start_tweet("tweetyodel") incantation, but just know that this command starts the GenServer and monitors its life, killing and re-starting it if needed, for instance when an error occurs.

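The supervisor itself is not covered in these two posts. As a rough idea of how start_tweet could be wired up, here is a minimal sketch assuming a :simple_one_for_one strategy (the real Tweetyodel.Worker.Supervisor in the repository may differ):

defmodule Tweetyodel.Worker.Supervisor do
  use Supervisor

  def start_link do
    Supervisor.start_link(__MODULE__, [], name: __MODULE__)
  end

  # Start a new, supervised Tweetyodel.Worker identified by `namespace`
  def start_tweet(namespace) do
    Supervisor.start_child(__MODULE__, [namespace])
  end

  def init(_) do
    children = [
      # each child ends up calling Tweetyodel.Worker.start_link(namespace)
      worker(Tweetyodel.Worker, [], restart: :transient)
    ]
    supervise(children, strategy: :simple_one_for_one)
  end
end

With :simple_one_for_one, Supervisor.start_child/2 appends [namespace] to the worker's arguments, so each call invokes Tweetyodel.Worker.start_link(namespace), and the supervisor restarts the worker if it crashes.
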
Also, you might have noticed that start_tweet takes an argument, "tweetyodel". This argument is the namespace (or, to put it simply, the name) that we use to refer to and identify a particular process.

The module used to implement the registry is called gproc; you can find it on Hex.

In short, I can talk to one of my processes just by using a name (the namespace) to refer to it. You could create many processes dynamically with different names by just using another string:

{:ok, _} = Tweetyodel.Worker.Supervisor.start_tweet("elixir")

End of part 1 (part 2 will come soon...). It's here!

P.S. The full repository of my pet project is available on github

(Things might change inside the life of the repository... shht don't tell anyone)


How to render slides from spacemacs org mode to reveal.js

Posted on Tue 31 May 2016 in spacemacs • Tagged with spacemacs, revealjs, org-mode, slides

Steps to follow

  1. Create a spacemacs layer or add to dotspacemacs-additional-packages ox-reveal
  2. SPC f e R to re-scan the packages (so that ox-reveal will be installed from melpa)
  3. Open a file in org mode (I use the deft layer to use org mode and taking notes)
  4. Put as a header #+REVEAL_ROOT: http://cdn.jsdelivr.net/reveal.js/3.0.0/
  5. Use a heading starting with * if you wish to have a single slide with no children
  6. Use a double heading starting with ** if you want a vertical slide as a branch of the parent slide
  7. Hit SPC : and type load-library, then type ox-reveal
  8. To render the slides just hit C-c C-e and select R B to save and preview the slides
  9. Enjoy!

EDIT: Thanks to Glenn Eckstein's comment.
In step 1, instead of adding ox-reveal, just put the following after your org entry in the dotspacemacs-configuration-layers section:

(org :variables
     org-enable-reveal-js-support t)

Configure ERC and ZNC.el with znc on spacemacs

Posted on Fri 25 March 2016 in spacemacs • Tagged with spacemacs, irc, erc, znc

Introduction and goal of today

Spacemacs has been my default editor for 7 days and I am really satisfied so far. I currently use it for Python and Elixir. I am also taking notes, and I discovered the deft layer, which fits my note-taking needs. Lately I am also hanging around on the Gitter (it has an IRC bridge at https://irc.gitter.im) and freenode (irc.freenode.net) networks.

spacemacs is emacs for newbies like me ;) And I know you can do everything in emacs, so I thought:

Why not set up an IRC client in emacs/spacemacs that connects to a znc bouncer, so as not to lose conversations on IRC or Gitter? (whew, such an original idea!)

NOTE: I am not covering how to configure znc. Basically you just have to execute znc --makeconf and follow the znc flow ;)

That is where ERC kicks in

ERC is a *comfortable* IRC client for emacs. Activating it is really as simple as:

  • Adding an erc layer to your dotspacemacs-configuration-layers

  • Execute it with SPC a i E (for encrypted mode).

  • Follow the prompts from ERC and be sure to choose port 6697 (the encrypted one)

If you want to connect to Gitter, you can connect to irc.gitter.im and grab your token. (The token is what you use as the password.)

ZNC kicks in too

The way I was able to connect was to add znc as a package in dotspacemacs-additional-packages. I am still not comfortable creating spacemacs layers, so I am keeping all the additional packages here. Reload your spacemacs configuration as usual with SPC f e R and you should have it.

Then you should be able to do M-x customize-group and type znc to configure znc. The part that has to be customized is ZNC Servers. The interface of this buffer should be pretty self-explanatory: just click on the arrow to expand the menu and operate! ;)

Connecting to znc from ERC finally!

After setting up ZNC.el with M-x customize-group, every time you want to connect you can just type M-x znc-erc, or M-x znc-all if you want to connect to all the networks.

You might be asked to re-enter the password; I still do not know how to fully automate the connection.

That's it! (as usual)

Bonus: emojis

  • Add to your dotspacemacs-configuration-layers emoji

  • Add to your dotspacemacs-additional-packages company-emoji

  • Add to your dotspacemacs/user-config (setq emoji-cheat-sheet-plus-display-mode t)

  • SPC f e R and you should be good!

Emoji support in ERC with autocompletion and display of the emoji!

HINT: do not forget the awesome SPC a i i combo, it cycles through all the chat buffers and returns to the source code or document that you were working on!


Render org journal documents with sphinx theme and spacemacs

Posted on Mon 21 March 2016 in spacemacs • Tagged with spacemacs, org mode, readthedocs, sphinx, readtheorg

Premise

I am an org mode beginner (I am always a beginner, that's why I write blog posts ;)); however, I wanted to document how to render my org journal notes nicely.

NOTE: for non-spacemacs users, org mode is a mode where you can write notes (long or short) and todos with its own syntax. You can export, for example, to pdf, html, markdown, etc. You also have agendas to organize TODO notes and much more; however, that's not the point of this short blog post.

org-journal

org mode has TODO notes; however, I wanted to write little journals and long notes, so I stumbled upon the org-journal package.

The org-journal package is focused on writing time-based journals (surprise?)

Just add org-journal to your dotspacemacs-additional-packages. I chose to remap the creation of a journal entry to <spc> jc and search to <spc> js. I still have to figure out why <spc> jv behaves like <spc> jc; I would like it to display only the journal.

For non-spacemacs users <spc> is the spacebar key

These are the remappings:

    (spacemacs/set-leader-keys "jc" 'org-journal-new-entry)
    (spacemacs/set-leader-keys "js" 'org-journal-search-forever)
    (spacemacs/set-leader-keys "jv" (kbd "C-u C-c C-j"))

Workflow

With these mappings to create a journal entry and then publish it, you can:

<spc> jc

Write your journal note in org mode

C-c C-e h o

To open the rendered html from org mode in the browser

If you want to render only a subsection of your journal, just position your cursor in the subsection and hit C-c C-e C-s h o

Theme

The rendered document does not look so pretty. Look ma', no css!

https://github.com/fniessen/org-html-themes to the rescue!

Copy one of the themes into your journal folder. On my machine it is $HOME/Documents/journal. I like the readtheorg theme; you can get it from the repository:

https://github.com/fniessen/org-html-themes/blob/master/setup/theme-readtheorg.setup

At the top of your journal just add:

#+SETUPFILE: theme-readtheorg.setup

Re-render with C-c C-e h o to see the preview and enjoy your org mode document with a great theme.

You can check http://spacemacs.org/doc/DOCUMENTATION.html to see how it looks!

That's it! Now you have a pretty-looking document!


Small Docker Images with erlang/elixir

Posted on Fri 18 March 2016 in docker • Tagged with docker, alpine, erlang, elixir

Premise

It is basically the first time that I am playing with docker.

Objective

I wanted to create a docker image from scratch (not using Docker Hub) and I wanted it to be small enough. In this case, small enough means less than 100 MB, ideally around 50 MB with everything that I need.

Creating the first docker image from scratch

First I tried to build a minimal debian image by using debootstrap directly:

sudo /usr/sbin/debootstrap jessie ./jessie-chroot
sudo tar -C jessie-chroot -c . | docker import - jessie

However, it generated an image that was around 250 MB.

It's been a long time since I paid attention to the base disk footprint of debian; I remembered a leaner base system. I might be having a brain fart, though ;)

Second try

Using the script provided by docker, I executed:

sudo .../mkimage.sh -t $USER/minbase debootstrap --variant=minbase stable

Under debian-based distributions, mkimage.sh should be located at /usr/share/docker-engine/contrib/mkimage.sh. After a few minutes (coffee and smoke time) I was pleasantly surprised to see that the image size was cut in half.

beatpanic/minbase   latest              3601280024d7        5 seconds ago       123.6 MB

Most probably because of the --variant=minbase option; however, my objective was not yet reached.

Third try

Thanks to a friend of mine (hello @the_mindflayer!) I discovered a Linux distribution called Alpine Linux. The base distribution is built on busybox, so it's quite small: a mere 7 MB (I guess it can be even smaller).

The beauty of Alpine Linux is that it is small, for sure; however, the thing that caught my eye was the apk package manager, which also contains... erlang and elixir!

The docker-engine package already contains an mkimage-alpine.sh script, so I gave it an instant try:

sudo /usr/share/docker-engine/contrib/mkimage-alpine.sh

And that's it for this step!

The alpine image is only 7 MB!

alpine              latest              a6ad3e89bcdd        22 hours ago        7.206 MB

7 MB should leave enough room for my objective, I thought!

So, let's give installing erlang plus elixir a try.

Thanks to bitwalker's repo I was able to quickly bootstrap my image with his Dockerfile. You can check it here: https://github.com/bitwalker/alpine-elixir-phoenix/blob/master/Dockerfile However, when re-building the image with this Dockerfile it was fat again.

"Stripped down" Dockerfile

I decided to remove a bunch of things that I do not need and adapt bitwalker's Dockerfile in this way:

FROM alpine:latest
MAINTAINER Fancy Maintaner <fancy@fancy.org>

# Important!  Update this no-op ENV variable when this Dockerfile
# is updated with the current date. It will force refresh of all
# of the base images and things like `apt-get update` won't be using
# old cached versions when the Dockerfile is built.
ENV REFRESHED_AT 2016-03-18
ENV ELIXIR_VERSION 1.2.3
ENV HOME /root

# Install Erlang/Elixir
RUN echo 'http://dl-4.alpinelinux.org/alpine/edge/main' >> /etc/apk/repositories && \
  echo 'http://dl-4.alpinelinux.org/alpine/edge/community' >> /etc/apk/repositories && \
  apk --update add ncurses-libs ca-certificates \
                  erlang erlang-dev erlang-kernel erlang-hipe erlang-compiler \
                  erlang-stdlib erlang-erts erlang-tools erlang-syntax-tools erlang-sasl \
                  erlang-crypto erlang-public-key erlang-ssl erlang-ssh erlang-asn1 erlang-inets \
                  erlang-inets erlang-mnesia erlang-odbc \
                  erlang-erl-interface erlang-parsetools erlang-eunit && \
  wget https://github.com/elixir-lang/elixir/releases/download/v${ELIXIR_VERSION}/Precompiled.zip && \
  mkdir -p /opt/elixir-${ELIXIR_VERSION}/ && \
  unzip Precompiled.zip -d /opt/elixir-${ELIXIR_VERSION}/ && \
  rm Precompiled.zip && \
  rm -rf /var/cache/apk/*

# Add local node module binaries to PATH
ENV PATH $PATH:node_modules/.bin:/opt/elixir-${ELIXIR_VERSION}/bin

# Install Hex+Rebar
RUN mix local.hex --force && \
  mix local.rebar --force

CMD ["/bin/sh"]

This bumps to the latest elixir version at the time (1.2.3) and removes python, g++, wget, git, and make.

Conclusion

The generated image satisfies my initial objective since it is a mere 43 MB:

elixir             latest              cf6bef978aa3        8 hours ago         42.88 MB

Mission complete for now!

$ docker run -ti d3adb33f iex
Erlang/OTP 18 [erts-7.2.1] [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false]

Interactive Elixir (1.2.3) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> 

That's it!


Going from vim to spacemacs

Posted on Fri 19 February 2016 in spacemacs • Tagged with vim, spacemacs, awesome

Why go from vim to spacemacs?

Because you will have vim + emacs, finally in an almost seamless integration! No more editor wars, peace on earth and endless fun for everyone..

For instance, from a less hyped point of view, in my case I needed good support for the Elixir language. That is where the great alchemist package from Samuel Tonini kicks in. However, this post is not about alchemist; it tries to focus on a possible vim-to-spacemacs work-flow.

How am I using spacemacs in this very moment?

This is a list of combos that I would like to share (note that the leader key is the blank space, or <spc>):

  • Project outline/tree -> <spc> p t
    Once you are inside a project you can have the outline/tree of the project.

  • grep project -> <spc> /
    This will enable you to grep the project and jump to the file directly

  • Exceptions in tests
    When you run tests (for example nosetests in python), you will be able to jump to the failed test with just an enter

  • search in a file with dynamic display <spc> s s
    Want to search in the current file with live preview? This is your friend

Magit (git client for spacemacs/emacs)

NOTE that when you are already in the magit buffer you do not need <spc> g s to re-enter it

  • magit <spc> g s ?
    get help, it will be useful

  • magit <spc> g s (inside the magit buffer) P p
    Push your current branch to a remote (you will be able to choose the remote in an interactive buffer: origin, collaborator_remote, friend_remote)

  • magit <spc> g s (inside the magit buffer) s to stage/unstage the file under the cursor

  • magit <spc> g s (inside the magit buffer) c c to open the commit editor

  • magit <spc> g s (inside the magit buffer) c c (type commit message) ctrl-c ctrl-c

  • <spc> f r (recent files)

For the rest, vim key-bindings, I use basic vim movements and combos and they fit my work-flow. :new, :vsplit, :sb, <ctrl>-w k, <ctrl-w> l works well for me

Thanks to Marco Baringer (segv) for mentioning spacemacs and to Kai Strempel (MacBethIII) for guiding me through magit and a little bit of spacemacs. It will be an interesting journey, I am pretty sure!


Vanity domain

Posted on Sat 22 March 2014 in vanity • Tagged with berlin, vanity, beatpanic

BUMP


Vundle, easy VIM plugin management

Posted on Wed 12 December 2012 in vim • Tagged with vundle, vim, editor

Vundle is a VIM plugin manager. It works like a charm and is approaching version 1.0; at the moment I have 0.9, but it's pretty stable and totally functional.

Vundle enables you (yes, you, VIM addict) to install plugins like pathogen does, but with some spicy search, update, and delete capabilities.

You just fire up your VIM, do a :BundleSearch NERD for example, and you are ready to receive results and install, for instance, The-NERD-tree by pressing 'i'.

Once a bundle is installed you have to add a line like this to your $HOME/.vimrc:

Bundle 'The-NERD-tree'

and that's all.

Vundle will keep track of the packages and will update them every time you run :BundleUpdate

You can also specify git repos; it's really easy:

Bundle 'git@github.com:davidhalter/jedi-vim.git'

This fetches jedi-vim from GitHub automatically and puts it into your $HOME/.vim/bundle.

Here is an example of my current .vimrc if you are curious

Enjoy!


Jedi vim a powerful autocompletion for the python addict

Posted on Sun 21 October 2012 in vim • Tagged with vim, python, jedi

Sometimes you dream about a thing and the internet helps you realize your little dreams (you lazy worker!)

Since I began to use vim with python I have added a lot of plugins to make my editing experience comfortable, like dynamic syntax checking, flake8 checking, etc. etc. -- you can have a look at my dirty vim setup if you feel adventurous.

I always missed some kind of advanced and fast autocompletion for python that introspected all the modules/code dynamically.

Thanks to Pycoder's Weekly (check it out at pycoders !) I found jedi-vim

This "small" "xwing" jedi-vim plugin! Using his force, you'll be able to:

  • Autocomplete like a charm
  • Goto
  • Goto definition
  • Call pydoc for any code (with the help of 'K')

Please see the delicious screenshots at jedi-vim -- you will be enlightened (maybe ;))


PostgreSql schema support, PostGIS and Django

Posted on Mon 30 July 2012 in django • Tagged with django schemata, postgresql, postgis, django, schema

postgis & django

I recently searched for postgresql schema support in django. There is a long-standing ticket (opened about 4 years ago) that aims to implement generic support for database schemas.

It is surely all nice and dandy, but I needed an "instant" solution. After a bit of research I found a nice layer (though the author says WORKSFORME :)) called django schemata.

My personal need was to also have postgis support, so... here we go:

To add postgres schema support you have to:

  • Set your django ENGINE to django_schemata.postgresql_backend

    DATABASES = {
        'default': {
            'ENGINE': 'django_schemata.postgresql_backend',
            'NAME': 'yourdb',
            'USER': 'postgres',
            'PASSWORD': '',
            'HOST': '',
            'PORT': '',
        }
    }
    
  • Add to the top of MIDDLEWARE_CLASSES:

    MIDDLEWARE_CLASSES = (
        'django_schemata.middleware.SchemataMiddleware',
        [...],
    )
    
  • Configure the SCHEMATA_DOMAINS dict

My needs were to have django in the public schema and all the other stuff related to my host inside a particular schema:

SCHEMATA_DOMAINS = {
    'localhost': { # localhost is for development, you should put the
                   #  hostname/ip instead in production mode
        'schema_name': 'mypersonalschema',
    },
    'django': {
        'schema_name': 'public',
    },
}
  • Django Schemata works with south, so we have to add the two to INSTALLED_APPS:
    INSTALLED_APPS = (
        'django.contrib.gis',
        # Uncomment the next line to enable the admin:
        'django.contrib.admin',
        'geonode.observations',
        'south',
        'django_schemata',
    )
    

NOTE: I also added django.contrib.gis because I need postgis support in Django

I had to manually specify the POSTGIS_VERSION variable, which is:

POSTGIS_VERSION = '1.5.3'

And finally:

SOUTH_DATABASE_ADAPTERS = {
    'default': 'south.db.postgresql_psycopg2',
}

ORIGINAL_BACKEND = 'django.contrib.gis.db.backends.postgis'

After your settings.py is in place, you can play with django schemata in this way:

$ python ./manage.py manage_schemata # creates the schema
$ export DJANGO_SCHEMATA_DOMAIN="django"
$ python ./manage.py syncdb # to create django stuff in the 'public' schema
$ python ./manage.py sync_schemata --migrate # to create/migrate your
                                             # database inside 
                                             # the 'mypersonalschema'
                                             # schema.

Ok, That's all folks for now!


Soundcloud widget added to the blog with my muzak!

Posted on Wed 14 September 2011 in soundcloud • Tagged with soundcloud, music, beatpanic, linux, renoise

Just for fun I integrated a little music player after my GitHub activity on the right. Check it out: this is my attempt at making something interesting at a hobbyist level under GNU/Linux!


OpenQuake 0.4.3 development release is out!

Posted on Fri 09 September 2011 in openquake • Tagged with openquake, floss, earthquake

start using openquake!

The new OpenQuake 0.4.3 dev version has been released! Check it out on the openquake blog: loads of improvements, db support, new demos and many more features coming!

Join us on #openquake@irc.freenode.net or use freenode's webchat if you are interested in contributing in some way; if you have an elaborate question, ask on our development mailing list (not heavy traffic).


Free pelican time

Posted on Wed 07 September 2011 in pelican • Tagged with pelican, github

share!

So I needed that "important" github activity feature, and in my free time I coded it; pull request made.

hope that it will go upstream ;)

If you are curious, check it out: it's a simple plugin, and here is my pull.

UPDATE: it is now in pelican's plugins branch, making its way to be integrated into pelican's master.


Oops, I did it (again) github activity

Posted on Mon 05 September 2011 in github • Tagged with github, pelican, dirty

So I cooked up my dirty 'script' and the activity is listed on the right side. Have fun! ;)

Still I have to improve the look, but anyways TTFN (TaTaForNow!)


Pelican and github activity

Posted on Mon 05 September 2011 in github • Tagged with github activity

I was searching for a 'blog engine' that supported a github activity stream and I hadn't found one. But the nice surprise was to find pelican, a python RST/Markdown processor with sophisticated templating and a lot of features.

In the next days/week I'll try to cook up a quick script for my github activity and maybe (as time permits) to integrate pelican with http://www.feedparser.org/

Let's see how it goes ;)


Hello world (reprise!)

Posted on Mon 05 September 2011 in hello • Tagged with hello, world, reprise, beatpanic

Classic Hello World post

http://docs.notmyidea.org/alexis/pelican/ rulz