Friday, December 31, 2021

NGINX was not responding after restart

TL;DR: NGINX was not launching correctly. Since the process was writing no logs, I had to use strace to figure out what was going on.

There was something weird going on with one of our NGINX servers. The sequence of events went like this:

1. Our server rebooted (after many, many months of uptime). 

2. After the reboot, NGINX was running but not responding to requests.

Even if I curled localhost like this, nothing happened:

root@amy:/tmp# curl -v http://localhost
* Rebuilt URL to: http://localhost/
* Hostname was NOT found in DNS cache
*   Trying ::1...
* Connected to localhost (::1) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.38.0
> Host: localhost
> Accept: */*

Curl was stuck at that point.

3. Checked error and access logs. Nothing was being written to the logs after the reboot.

That was weird...

4. Did a ps to check if the process was running at all. And it was. But I realized that no NGINX workers had been spawned after launch. How come?

This is what the output looked like:

* Connection #0 to host localhost left intact
root@amy:/tmp# ps aux | grep nginx
root       880  0.0  0.1  43424  5968 ?        Ss   08:40   0:00 nginx: master process /usr/local/nginx/sbin/nginx -g daemon on; master_process on;
root@amy:/tmp# 

5. Looked at journalctl's output to see if anything was going on with the service.

This was all I had:

Dec 31 08:40:11 amy systemd[1]: Stopping A high performance web server and a reverse proxy server...
Dec 31 08:40:11 amy systemd[1]: Stopped A high performance web server and a reverse proxy server.
Dec 31 08:40:15 amy systemd[1]: Starting A high performance web server and a reverse proxy server...
Dec 31 08:40:15 amy systemd[1]: Started A high performance web server and a reverse proxy server.

Nothing else.

6. So I had to resort to heavy machinery: strace

Launched strace, attaching it to NGINX's master process using its PID.

# strace -p 513 -s 10000 -v -f

On a different terminal, reloaded NGINX:

# systemctl reload nginx

Then strace's output gave me the reason no workers were being spawned:

[pid   844] prctl(PR_SET_DUMPABLE, 1)   = 0
[pid   844] chdir("/tmp/cores")         = -1 ENOENT (No such file or directory)
[pid   844] write(16, "2021/12/31 08:38:08 [alert] 844#0: chdir(\"/tmp/cores\") failed (2: No such file or directory)\n", 93) = 93
[pid   846] fstat(20,  <unfinished ...>
[pid   844] exit_group(2)               = ?
[pid   844] +++ exited with 2 +++

7. Turned out the directory I had configured a long time ago (to collect core dumps while evaluating a SIGSEGV I was having) was deleted on reboot, so the workers were failing to spawn. After recreating the directory, NGINX was responding to my requests once again.
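The fix itself was tiny; here is a sketch of it. The tmpfiles.d rule (file name, mode, owner) is my own assumption, added because /tmp is typically cleaned at boot, which is what bit me here:

```shell
# Recreate the working directory the workers were trying to chdir() into
mkdir -p /tmp/cores

# /tmp is usually wiped on reboot, so also write a systemd-tmpfiles rule
# that recreates it at boot; install this file to /etc/tmpfiles.d/ as root
printf 'd /tmp/cores 0750 root root -\n' > nginx-cores.conf
```

With the directory back in place, a reload spawned the workers again.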

====

End of (sad) story. Half an hour I will never get back. 

Happy New Year!



Thursday, October 7, 2021

Open source library for communicating with SIFEN

As part of the rollout of SIFEN (Sistema Integrado de Facturación Electrónica), we teamed up with the company TAXit! to release, as open source, a Java library (Java 8 and up) that makes it easier for systems to communicate with the SET.

Fundamentally, the library saves you time on:

* Understanding how to get an HTTP client working with Client Certificate Authentication to make requests to the SIFEN system

* Understanding how to use the standards for ELECTRONIC SIGNATURES in XML documents

* Putting invoice information into the format the SIFEN system requires.

One of the goals of the project is to avoid depending on other libraries, so that the footprint added to a project using it does not grow too much.

All the code is published in a GitHub repository and, for convenience, MavenCentral can be used to automatically pull in the dependency in projects that use Maven or Gradle. Contributions that add more functionality to the library are welcome.
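Including it in a Maven-managed project is then a one-block change in pom.xml. The coordinates and version below are illustrative placeholders; the project's README on GitHub has the real ones:

```xml
<!-- Coordinates and version are placeholders; see the GitHub README for the real ones -->
<dependency>
    <groupId>com.roshka.sifen</groupId>
    <artifactId>sifen-lib</artifactId>
    <version>1.0.0</version>
</dependency>
```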

We also took part in a conversation with the people in charge of the project at Tributaciones, to talk a bit about it. Here is the interview:




Tuesday, July 20, 2021

8 years at Roshka!

The challenge of the first job.

You have enthusiasm, a desire to learn, and some intelligence, but how do you put that on a résumé? Even if you write it down, you can't offer any guarantee. So what does it take? For me, the answer was: an opportunity.

From the day of that interview until today, 8 years later, opportunity has been a constant at Roshka: the opportunity to show what I can do, the opportunity to learn, and the opportunity to grow, both professionally and personally.


An impeccable environment.


On every team at the company I got to be part of, I had the same feeling: being surrounded by a group that, in different ways, always gave me not only the opportunity but also the push to keep improving, to keep reaching for more, and to learn something new every day. It's an environment of people who not only have spectacular knowledge, but also have no problem whatsoever sharing it with you. Everything I learned, I definitely owe to always being surrounded by an impeccable group of people.

Taking on challenges.

Starting in front-end web development, then backend, then mobile development: each was a challenging idea at the time. I think what pushed me to take on these challenges were some ideas that are always present at Roshka: constantly improve, always give your best, be confident, and work as a team.


You can always learn more.


Looking back again, I can say my path at Roshka has always been an uphill path, a path of growth. Along the way, one thing I learned well is that you can always learn more!

Cheers, Roshka! And thank you for the opportunity and the push, ever since that July 1st, 2013, to grow into what I am today.


Thursday, February 11, 2021

Install PGBADGER on a remote server

PGBADGER is an awesome tool: a fast PostgreSQL log analyzer that produces rich reports.

It works by analyzing PostgreSQL logs and generating HTML reports that give you a lot of information on query performance, locking queries, errors, connections, vacuums, and pretty much everything else you need to analyze a PostgreSQL server.

A (very) simplified PGBADGER installation goes something like this:

Simplified PGBADGER setup

I've seen PGBADGER installations update log analysis daily, hourly, every ten minutes, etc. I usually like to keep my log analyzer as updated as possible.

On small PostgreSQL installations, PGBADGER usually runs on the very same server the PostgreSQL is installed on. This is fine for most use cases.

However, if PostgreSQL runs on a busy server (high load), this is not recommended. The PGBADGER script, when dealing with huge log files, can be very demanding, as expected. So, in those cases, what I like to do is either:

a) Run it just once a day, at times when the server is mostly idle (after midnight). This is not always possible, especially if PostgreSQL is serving an application used across different timezones.

b) Or just ship the logs periodically to a different server and do the log analysis there (my preferred way).

The installation will be something like this:

So, here is what I usually do (YMMV):

1. Configure PostgreSQL on SERVER A so it writes logs that PGBADGER will be able to process. In this example, I am going to rotate and ship PostgreSQL logs every ten minutes.

2. Install PGBADGER in SERVER B

Make sure there is enough storage to hold the PostgreSQL log history for at least 36 months. It is usually a good idea to keep enough history to reflect possible software or application load changes over time. It has saved the day for me more times than you would think.

In my case (I'm a mostly DEBIAN/APT based distribution guy) it is just:

# apt install pgbadger
These are the parameters I change in SERVER A's postgresql.conf file:
logging_collector = on
log_directory = '/var/log/postgresql'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 10min
log_min_duration_statement = 0
log_checkpoints = on
log_connections = on
log_disconnections = on
log_duration = on
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
log_lock_waits = on
log_autovacuum_min_duration = 0

3. Configure ssh-key based authentication for ssh access from SERVER A to SERVER B

This is important: you don't have to, but it will make things easier if you use the postgres user on SERVER A for this, because it already has access to the log files.

These are the steps you need:

3.1 SERVER A - Create an ssh-key for the postgres user.

This will create two files. If you use the default values on a Debian based system, they will be:
/var/lib/postgresql/.ssh/id_rsa
/var/lib/postgresql/.ssh/id_rsa.pub
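The command that produces those two files would be something like this, run as the postgres user on SERVER A. The empty passphrase is an assumption on my part; it keeps the later cron job non-interactive:

```shell
# Create the key pair only if it does not exist yet (idempotent)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
```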
3.2 SERVER B - Create the user that will own the log files and run the PGBADGER commands

Since the analysis reports will be HTML files, it is very handy if this user has access to a directory that can be exposed through a web server (Apache, NGINX, IIS, or any other you use).

In this user's home, if it does not already exist, create this file:
/home/username/.ssh/authorized_keys
Two important things here:

a) /home/username is just an example here. Make sure you use the home directory for the created user.
b) this file, and the directory containing it, MUST REVOKE ANY TYPE OF ACCESS to GROUP and OTHERS, so make sure its permissions are correct.

If you need to, change the permissions like this:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
In this file, append the contents of the id_rsa.pub file that was created in step 3.1.
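The append can be scripted like this; the PUBKEY value is a placeholder standing in for the real contents of id_rsa.pub copied over from SERVER A:

```shell
# On SERVER B, as the report user (placeholder key, not a real one)
PUBKEY='ssh-rsa AAAAexampleplaceholder postgres@server-a'
mkdir -p ~/.ssh && chmod 700 ~/.ssh
printf '%s\n' "$PUBKEY" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```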

3.3 SERVER A - Check that you can log in to SERVER B without using a password.

This is easy. Logged in as the postgres user, make sure you can ssh into SERVER B without typing a password.

SERVER A> $ ssh username@SERVERB
SERVER B> $ echo I am in SERVER B and I did not type a password to login
4. SERVER B - Create a script to run PGBADGER remotely

File name should be: /home/username/bin/run-pgbadger.sh

#!/bin/bash

LOGS_DIR=/home/username/pgbadger/tmp_files
PGBADGER_HTML_DIR=/var/www/html/pgbadger/
NJOBS=10

echo "Unzipping log files"
gunzip -v $LOGS_DIR/*.log.gz
echo "Processing pgbadger with $NJOBS jobs"
pgbadger -j $NJOBS -I --outdir $PGBADGER_HTML_DIR $LOGS_DIR/*.log
echo "Removing log files"
rm -v $LOGS_DIR/*.log
echo "DONE"

And make sure it has EXECUTION permissions.

$ chmod 750 /home/username/bin/run-pgbadger.sh

Important script variables:

a) LOGS_DIR: where log files are going to be temporarily stored when shipped from SERVER A
b) PGBADGER_HTML_DIR: where the analysis results will be incrementally calculated and stored. It is useful if this directory is accessible through a WEB SERVER. Example: http://server-b/pgbadger
c) NJOBS: how many parallel jobs pgbadger will use to process the files.

5. SERVER A - Create a log shipping script to run on SERVER A every 10 minutes

File name should be: /var/lib/postgresql/bin/ship-logs-to-pgbadger.sh

#!/bin/bash
LOGS_DIR=/var/log/postgresql
REMOTE_DIR=/home/username/pgbadger/tmp_files
REMOTE_PROCESS=/home/username/bin/run-pgbadger.sh
SERVER_B=username@SERVERB

echo "Compressing files"
find $LOGS_DIR -cmin +1 -exec gzip -v {} \;
echo "Sending files and removing them afterwards"
rsync --remove-source-files -av $LOGS_DIR/*.log.gz $SERVER_B:$REMOTE_DIR
echo "Executing remotely PGBADGER"
ssh $SERVER_B "$REMOTE_PROCESS"
echo "Done"
Important script variables:

a) LOGS_DIR: where PostgreSQL's log files are stored
b) REMOTE_DIR: this is the SERVER B directory that will temporarily hold the log files (MUST match what you've configured in step 4).
c) REMOTE_PROCESS: this is the SERVER B script that will be executed after the log files are shipped (MUST match what you've configured in step 4).
d) SERVER_B: username and server address for remote access (MUST match what you've configured in step 3).

6. SERVER A - Manually run the script from step 5 to see if everything is working

/var/lib/postgresql/bin/ship-logs-to-pgbadger.sh

After running it, if everything is OK (it might take a while the first time), you can point your browser to:

http://server-b/pgbadger

And PGBADGER reports should be good to go.





7. SERVER A - Configure a cron task to execute the script every 10 minutes and run PGBADGER analysis tool 

*/10 * * * * /var/lib/postgresql/bin/ship-logs-to-pgbadger.sh
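If a run ever fails silently, a variant of the crontab entry that keeps a log of each execution helps when debugging (the log path is my assumption):

```
*/10 * * * * /var/lib/postgresql/bin/ship-logs-to-pgbadger.sh >> /var/lib/postgresql/ship-logs.log 2>&1
```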


That is all. Enjoy!

Thursday, November 22, 2018

Using Apache 2.4 Reverse Proxy to some (but not all) Tomcat/JavaEE Applications

Usually, if you have a couple of Tomcat WARs (Java web applications) deployed on a Tomcat Servlet Container, you have a couple of options for exposing those applications through an Apache (or NGINX) HTTP server.

Server Setup is laid out in the following pic:

Server Layout


For the sake of this discussion, let's assume we have these WAR applications deployed on the Tomcat server (besides the default ROOT and manager applications):

* users.war     -> tomcat context /users
* webapp.war    -> tomcat context /webapp
* discovery.war -> tomcat context /discovery
* bingo.war     -> tomcat context /bingo
* clubs.war     -> tomcat context /clubs
  

Option 1)

Just do a full reverse proxy from the Apache server to the Tomcat server (you could use either the AJP or the HTTP connector for that purpose).

Apache Configuration will be something like this:

    <Location /tomcat/>
        ProxyPass ajp://192.168.0.2:8009/
        ProxyPassReverse https://example.com/tomcat/
    </Location>

This is very easy to set up. It only takes adding these lines to a virtual host and done!

You could then access the Tomcat applications at the following URLs:

* http://example.com/tomcat/users
* http://example.com/tomcat/webapp
* http://example.com/tomcat/discovery
* http://example.com/tomcat/bingo
* http://example.com/tomcat/clubs

The problems with this approach are:

a. You expose ALL APPLICATIONS (even the ones you don't want to expose)
b. You expose the default applications (if they haven't been removed), such as Tomcat's ROOT and Manager.

Option 2)

Do one reverse proxy location for each application you want to expose. Let's say you want to expose `users`, `webapp` and `clubs`. Your configuration will be something like this:

    <Location /tomcat/users/>
        ProxyPass ajp://192.168.0.2:8009/users/
        ProxyPassReverse https://example.com/tomcat/users/
    </Location>
    <Location /tomcat/webapp/>
        ProxyPass ajp://192.168.0.2:8009/webapp/
        ProxyPassReverse https://example.com/tomcat/webapp/
    </Location>
    <Location /tomcat/clubs/>
        ProxyPass ajp://192.168.0.2:8009/clubs/
        ProxyPassReverse https://example.com/tomcat/clubs/
    </Location>

The problems with this approach are obvious:

a. If you deploy 25 WARs in your Tomcat and want to expose 18 of them, your Apache configuration will grow and become harder to maintain.
b. COPY+PASTE errors might occur more frequently when adding new applications.
c. If your Location sections contain more configuration options (RequestHeader directives, Require, AuthType), then your COPY+PASTE gets bigger every time, and if you need to change or add something, you will have to do it in every single block you created before.


Option 3)

My favorite option nowadays.

If you want the same three applications exposed, you can use the LocationMatch and ProxyPassMatch directives to get the best of both previous options:

    <LocationMatch "^/tomcat/(?<apiurl>(users|webapp|clubs)/.*)">
        ProxyPassMatch ajp://192.168.0.2:8009/$1
    </LocationMatch>

You use a regular expression to filter which URL paths will go through to the Tomcat server, and you keep everything in a single configuration block.
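As a bonus, any extra directives that Option 2 would have forced you to copy into every Location now live in one place too. A sketch with illustrative auth directives (the AuthUserFile path and realm name are assumptions, not part of the original setup):

```apache
    <LocationMatch "^/tomcat/((users|webapp|clubs)/.*)">
        # Shared directives are declared once for every exposed app
        AuthType Basic
        AuthName "Tomcat apps"
        AuthUserFile /etc/apache2/htpasswd
        Require valid-user
        ProxyPassMatch ajp://192.168.0.2:8009/$1
    </LocationMatch>
```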

Monday, December 18, 2017

Video Roshka 2017

The anticipation to see Roshka's end-of-year video keeps growing!
Clients, friends, and roshkeros alike had been asking for months about the theme of the 2017 video.

This year we chose to support local talent, so we decided to use only music from national bands. The brief for the video was to get together in groups and prepare a short, free-form video, with the national music as the only condition.




The result was extremely gratifying! The energy roshkeros put into the videos, even though we are constantly in "hornos" (the term we use at Roshka to say we are under a lot, a lot, a LOT of work), is incredible, and it shows once more that we are at the #bestPlaceToWork




Among the selected tracks, we have music from several bands such as Kchiporros, Eeeks, Quemil Yambay, Revolber, Flou, La kchorra, Kitapena, Viernes 13, Pipa para Tabaco, Bohemi Urbana, Paico, Los verduleros, Area 69, and others.

The finale we filmed in the courtyard, as always, with a group of roshkeros and the song Negrita by Kchiporros, after a lot of rehearsal!


You can watch the video at this link:




Links to the videos from previous years:





Monday, December 26, 2016

Video Roshka 2016!

After getting ready all year long… Today we launch VideoRoshka 2016!

We made a mix of the most famous viral videos on the web: Harlem Shake, Mannequin Challenge, Stop motion, Turn down for what, Planking, Lip dub, Carpool, Ice bucket challenge, Time lapse, Savage Level, Flashmob, and Expectation vs. Reality.

To that, we added some extras, such as a trailer for Roshka Run (Roshka's weekly running/walking activity) and the famous courtyard finale, like every year.

The result far exceeded expectations… In 16 minutes, we give a glimpse of why we are at the #bestPlaceToWork

Special thanks to Minister Marecos and to Roshka Studios for all the help!

Finally, it is worth mentioning that the video continues the Ice Bucket Challenge, so we will have videos throughout 2017!


We hope you like it and… Jaumina!

Watch the video