
Centralised Logging

Logs stored on the AuthStack server itself are of limited use for reporting or for reviewing historical data. The administration panel can display the latest 100 lines of the Application Log files; reading more than 100 lines can cause performance issues, so we recommend that you centralise the logs by streaming them to a log collector. The logs can then be processed and analysed in real time, with alerts and reports generated as required.

Configuring Logging Options

The logging configuration for AuthStack Application logs can be found at:

ROOT . config/logging.php

Here is the example configuration that ships with AuthStack by default. It includes logging to a file and to Fluentd. Fluentd is a high-performance log collector that listens on a port or watches a file: it receives logs as a stream, or reacts when a file changes.

return [
    'application' => [
        'file' => [
            'enabled' => true,
            'name' => 'app.log'
        ],
        'verbose' => true,
        'channel' => 'AuthStackApplication',
        'level' => \Monolog\Logger::DEBUG,
        'fluentd' => [
            'enabled' => false,
            'host'  => 'localhost',
            'port'  => 31337,
            'options' => [
                'persistent' => true
            ]
        ]
    ],

    'exception' => [
        'file' => [
            'enabled' => true,
            'name' => 'exception.log'
        ],
        'verbose' => true,
        'channel' => 'AuthStackException',
        'level' => \Monolog\Logger::DEBUG,
        'fluentd' => [
            'enabled' => false,
            'host'  => 'localhost',
            'port'  => 31337,
            'options' => [
                'persistent' => true
            ]
        ]
    ]
];
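The 'channel' and 'level' keys map directly onto Monolog concepts: the channel becomes the logger name (which Fluentd later matches on), and the level is the minimum severity recorded. As a rough sketch only (this is not AuthStack's actual bootstrap code), the 'application' block above corresponds to something like:

```php
<?php
// Illustrative sketch: how the 'application' settings map onto Monolog
// objects. AuthStack's real wiring may differ.
use Monolog\Logger;
use Monolog\Handler\StreamHandler;

$config = require 'config/logging.php';
$app    = $config['application'];

// 'channel' becomes the logger name that the td-agent <match> blocks key on
$logger = new Logger($app['channel']); // "AuthStackApplication"

if ($app['file']['enabled']) {
    // 'level' is the minimum severity written; DEBUG records everything
    $logger->pushHandler(new StreamHandler($app['file']['name'], $app['level']));
}

$logger->debug('Recorded, because the configured level is DEBUG');
$logger->error('Also recorded: ERROR is above DEBUG');
```

Raising 'level' to \Monolog\Logger::WARNING, for example, would suppress the debug message while still recording the error.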

Installing Fluentd

To install Fluentd, please review the following installation guide.

Configuring Fluentd

First, enable fluentd within the logging.php file.

Change from:

'fluentd' => [
            'enabled' => false,
            'host'  => 'localhost',
            'port'  => 31337,
            'options' => [
                'persistent' => true
            ]
        ]

To:

'fluentd' => [
            'enabled' => true,  // Change here
            'host'  => 'localhost',
            'port'  => 31337,
            'options' => [
                'persistent' => true
            ]
        ]

If you have installed Fluentd on another machine, change the hostname and port to match your configuration. Typically Fluentd is installed on the same machine as AuthStack.
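For example, if Fluentd runs on a dedicated log host, the block might look like this (the hostname is a placeholder; the port must match the port in td-agent's <source> block):

```php
'fluentd' => [
    'enabled' => true,
    'host'  => 'logs.internal.example',  // placeholder: your Fluentd host
    'port'  => 31337,                    // must match the <source> port in td-agent.conf
    'options' => [
        'persistent' => true
    ]
]
```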


Sending to ElasticSearch

Next, we need to change the td-agent configuration so that Fluentd listens for, and accepts, messages sent to it by AuthStack.

Modify the following file:

/etc/td-agent/td-agent.conf

At the bottom of the file, add the following:

### AuthStack Logging Configuration
<source>
    @type forward
    port 31337
</source>

#=======================================================================
# START  Authstack Exception handling block 
#=======================================================================

# Filter for AuthStack exceptions

<filter AuthStackException.*>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
    @type_name ${record["level_name"]}
  </record>
</filter>

# Handler for AuthStack Exceptions
<match AuthStackException.*>

  @type copy

  <store>
    @type elasticsearch
    logstash_format true
    logstash_prefix authstack_exception
    target_type_key @type_name
    host localhost
    port 9200 
    flush_interval 10s
  </store>

 # Only uncomment this if you are running fluentd on a node not running AuthStack
 #<store>
 #  @type file
 #  path /var/log/authstack/exception
 #</store>

</match>

#=======================================================================
# END  Authstack Exception handling block 
#=======================================================================

#=======================================================================
# START  Authstack Application logging block
#=======================================================================

<filter AuthStackApplication.*>
  @type record_transformer

  <record>
    hostname "#{Socket.gethostname}"
    @type_name ${tag_parts[1]}
  </record>

</filter>

<match AuthStackApplication.*>

  @type copy

  <store>
    @type elasticsearch
    logstash_format true
    logstash_prefix authstack_application
    target_type_key @type_name
    host localhost
    port 9200 
    flush_interval 10s
  </store>

  <store>
    @type file
    path /var/log/authstack/app
  </store>

</match>

#=======================================================================
# END  Authstack Application logging block
#=======================================================================
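After saving the configuration, restart td-agent and confirm that the forward input is listening. The commands below assume a systemd-based distribution; adjust them for your init system.

```shell
# Reload td-agent so the new <source> and <match> blocks take effect
sudo systemctl restart td-agent

# Confirm the forward input is listening on port 31337
sudo ss -lntp | grep 31337

# Watch for startup or plugin errors
sudo tail -f /var/log/td-agent/td-agent.log
```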

Replace the values within each store block to reflect your ElasticSearch host and port.

flush_interval tells Fluentd how often to send the buffered logs to the specified endpoint. Sending in batches, rather than in real time, improves performance.

<store>
    @type elasticsearch
    logstash_format true
    logstash_prefix authstack_exception
    target_type_key @type_name
    host localhost # change this
    port 9200  # change this
    flush_interval 10s  # change this
  </store>

Sending to a file

If AuthStack is set up with multiple nodes and you want to write to a file, rather than to another database such as ElasticSearch, Fluentd can be configured to write all node log data to one file, which makes log management more straightforward.
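As a sketch, dropping the elasticsearch store from the match block above and keeping only Fluentd's built-in file output would look like this (the path is an example):

```
<match AuthStackApplication.*>
  @type copy

  <store>
    # All nodes forwarding to this collector write to the same path
    @type file
    path /var/log/authstack/app
  </store>
</match>
```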

Understanding Record Transformer

We've added @type record_transformer to the filter so that we can enrich the logs with additional data before sending them on. It acts as a middleware and data-transformation service. In this instance we've added the hostname of the node emitting the logs and given each record a specific type.

hostname "#{Socket.gethostname}"
@type_name ${record["level_name"]}
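For illustration, a record tagged AuthStackException.error might pass through the filter roughly as follows (the field values are invented examples):

```
# Before the filter:
{"message": "Database connection refused", "level_name": "ERROR"}

# After the filter (hostname and @type_name added):
{"message": "Database connection refused", "level_name": "ERROR",
 "hostname": "auth-node-1", "@type_name": "ERROR"}
```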

For more information about record transformer, please review this article.

