Initial Cuntoo Testing

January 8th, 2019

I recently grabbed an old Lenovo x611 from ebay with hopes of turning it into a proper republican workstation. As of this writing I am still working primarily2 on Debian boxen and have yet to fully complete a Gentoo install on my own iron. With my need for a sane os mounting, I figured this would be a great time to test out the latest result of Trinque's ongoing yak shaving, known as Cuntoo.

The following are the results of 3 separate runs of bootstrapper.sh. The target block device for each run was a SanDisk 32gb usb flash drive, and the os running the script was Debian 8. My aim was to first create a bootable Cuntoo on the usb stick, boot the Lenovo from the stick, and then obliterate its hdd with Cuntoo.

First Run - vpatch (keccak)

For my first run I used the 4.9.95-apu2 kernel config included in the cuntoo.tar. The script ran without issue (I let it run overnight and woke up to the final prompts); however, it turns out that the x61 needed a different kernel. Trying to boot from the first run's usb stick resulted in the machine hanging indefinitely at the "BIOS data check successful" message.

cuntoo first run: loading stuck on 'BIOS data check successful'

Second Run - vpatch (sha)

After a little research I found a kernel config for the x61 (mirrored here). Script once again ran without problems, and this time it booted!

However, after multiple attempts at entering what I know was the root password I had set, I was still met with the "Login Incorrect" message.

cuntoo second and third runs: cannot login with either root or main user

Frustrated, I began the usual3 dance of mounting the block device, chrooting in, diddling with passwd and/or the /etc/passwd and /etc/shadow files, etc. Sadly, while educative, none of my password futzing bore any fruit, so I turned my eyes to other areas.

One weird thing I noticed while poking and prodding was that some of my permissions looked a little off:

-rws--x--x 1 root root 33104 Jan  6 09:39 bin/su

At this point I was reluctant to burn much more time identifying and enumerating screwy permissions. I decided to throw in the towel on the second run and rerun the script on a third usb flash drive, just for good measure.

Third Run - vpatch (sha)

Third run had results identical to the second run. Meh.

So my original aims did not quite pan out, but what frustrates me more is that I'm not sure why just yet. Was it Debian strange? Perhaps just some elementary config I missed? Not sure at the moment, but on the positive side I learned a good chunk in the process4. I also have retained the Cuntoos installed on my second and third runs, so further prodding can continue down the road at least.

For now I may just try installing a Gentoo on the x61 with the hope that it teaches me a few more things I can use in later tests, as well as giving me an actual Gentoo to test from in order to rule out debianstrange. Anyways, I look forward to more Cuntoo testing in the future and perhaps a saner workstation.

  1. sadly, I couldn't find any of the 64bit x60s in working order []
  2. save for my spiffy Pizarro Rockchip []
  3. Well, usual to others versed in linux, I suppose. To me it was a very enlightening experience of just how damned easy it is to pop an os once you have physical access! []
  4. For example, after all of my troubleshooting I'm now quite comfortable with chroot! []

Conveyor Outlook: Now to Feb 2019

November 12th, 2018

For both my own sanity and for continued communication re: what I'm working on, I will outline what I aim to accomplish by the end of February 2019:

  1. Introduce auto-bid functionality into auctionbot. As of now, lobbesbot is the only bot with auto-bid capabilities and, imho, I neither designed nor implemented that functionality correctly to begin with1 and need to redo it properly.
  2. Discontinue legacy !Qauction functionality in lobbesbot.
  3. Migrate all legacy auction data2 from lobbesbot into auctionbot.
  4. Implement the old-but-improved billing3.

Items most-likely on conveyor for after March 2019:

  • A more automated price history for auctions and a "ticker" functionality for auctionbot
  • Redo of the remainder of lobbesbot functionality to also sit atop logbot
  • Auxiliary logotrons for #pizarro and #eulora
  1. some related mini-threads: http://logs.minigame.biz/2018-07-16.log.html#t18:58:48 and http://logs.minigame.biz/2018-07-18.log.html#t18:04:10 []
  2. For the curious: 366 auctions dating back to Feb 2017 []
  3. I'ma need to retool the automated back-end bits to work with the new auctionbot database. Also need to think through how I want to report usage now that I'm only billing at a certain threshold []

Bulletin on auctionbot fees

November 4th, 2018

Some of the following may or may not be news to everyone. Nevertheless, I figure this deserves a proper post. Going forward, auctionbot fees will be handled as follows:

  • 20 ecu (2 satoshi) per auction hour charged to the creator of that auction1 for auctions ending with a sale
  • 100 ecu (10 satoshi) per auction hour charged to the creator of that auction for auctions ending without sale
  • Quarterly usage will still be published and deeded as before; however, the creator of the auction will not receive an invoice until the amount of unpaid fees hits a threshold of 1mn ecu (0.001 BTC).2
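To put the billing arithmetic in one place, here is a quick illustrative sketch (Python; the rates and threshold are the ones listed above, everything else is mine):

```python
# Fee rates and billing threshold from the schedule above, all in ecu.
FEE_SALE = 20          # ecu per auction hour, auction ended with a sale
FEE_NO_SALE = 100      # ecu per auction hour, auction ended without a sale
THRESHOLD = 1_000_000  # 1mn ecu (0.001 BTC) of unpaid fees triggers an invoice

def hours_until_invoice(fee_per_hour):
    """Auction hours needed before unpaid fees reach the threshold."""
    return THRESHOLD / fee_per_hour

print(hours_until_invoice(FEE_SALE))     # 50000.0 hours if everything sells
print(hours_until_invoice(FEE_NO_SALE))  # 10000.0 hours if nothing sells
```

At twelve 120-hour auctions a year (1440 hours), 50,000 hours works out to the ~34 years mentioned in the footnote below.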

Why have fees again?

For those that may not be aware, auctionbot fees originated as a way to discourage spam in #eulora:

mircea_popescu lobbes and by the way i'm all for your monetizing this. charge people 200 ecu / hour for successful auctions and 1k ecu/hour for auctions that end with no bids to cover for the cost of spamming the channel, and you can keep teh proceeds.

In other words, since the bot lists all active auctions in-channel every X hours, the fees exist to prevent large masses of 'junk' auctions from cluttering the logs.

Why are the fees at the rate they are today?

In late 2017 the ecu floated to 1 ecu = 0.1 satoshi3. After some discussion in the logs, the auctionbot fees were lowered to the 20 ecu/100 ecu per hour rates stated at the top of this post.

I will continue to keep the fees relatively low.

Why is there a threshold of 0.001 BTC before collecting?

I've learned in the last year that running around each quarter and billing/collecting on what were sometimes very small sums4 was a tax on my time.

Perhaps more importantly, I do not want to discourage usage of the bot now that it has been seeing broader usage across the Republic. I believe a bill-at-threshold approach still serves the original purpose of discouraging spam, while at the same time not discouraging usage.

Of course, I will always welcome discussion on the matter.

  1. buy or sell []
  2. To put in perspective: assuming all your auctions ended in a sale, you would need to run 50,000 hours worth of auctions to see a bill. This means for users that only create, say, twelve 120-hour auctions per year, assuming they all ended in a sale it would take them ~34 years to see a bill (This, of course, assuming that fees remain unchanged.) []
  3. yes, 0.000000001 BTC, or a tenth of a satoshi []
  4. we're talking figures like 0.0000004 BTC []

auctionbot is live

October 14th, 2018

I'm pleased to report that auctionbot is now live and sitting in #trilema, #eulora, #pizarro, and #trilema-lobbes. Here's a quick run-down on usage:

Call command: !X1

Commands:

!Xsell opening(ecu) duration(hours as integer) item/lot

Create an auction/order selling item/lot opening at opening(ecu), running for duration(hours as integer) hours

e.g. !Xsell 3.5mn 72 One bag of entropy

This creates a typical auction, and functions the same as the classic !Qauction from the old bot. The creator of the auction is selling some thing with potential buyers bidding the price up.

!Xbuy opening(ecu) duration(hours as integer) item/lot

Create an auction/order buying item/lot opening at opening(ecu), running for duration(hours as integer) hours

e.g. !Xbuy 3.5mn 72 One bag of entropy

This essentially creates a reverse auction. The creator of the auction is buying some thing with potential sellers bidding the price down.
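The only real difference between the two order types is the direction bids move. A hypothetical sketch of the validation rule (illustrative Python; none of these names come from the bot's actual code):

```python
def bid_is_valid(order_type, current_best, new_bid):
    """Hypothetical check: sell auctions bid the price up,
    buy (reverse) auctions bid the price down."""
    if current_best is None:            # no bids yet: any bid stands
        return True
    if order_type == "sell":
        return new_bid > current_best   # buyers outbid each other upward
    if order_type == "buy":
        return new_bid < current_best   # sellers undercut each other downward
    raise ValueError("unknown order type")

print(bid_is_valid("sell", 500, 600))   # True: higher bid on a sell auction
print(bid_is_valid("buy", 500, 600))    # False: a buy auction only moves down
```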

!Xbid order-number amount(ecu)

Bid amount(ecu) on auction specified by order-number

e.g. !Xbid 1004 1.5bn

!Xcancel order-number

Cancel an auction that you2 created specified by order-number

e.g. !Xcancel 1003

!Xview order-number

View details on a past or current auction by order-number

e.g. !Xview 1003

!Xlist

List all active auctions. Note: If there are no active auctions the bot will say nothing.

!Xmybids

List all active auctions for which you have made a bid. (Only accepted via PM)

!Xhelp

Returns the url for this page

!Xping

Ping the bot

!Xautobid

Coming3 soon...

Assorted Likbez:

All dealings are in ecu.

Currently, 1 billion ecu = 1 btc
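At that rate, conversion is just a fixed ratio; a small illustrative helper (Python, not part of the bot):

```python
ECU_PER_BTC = 1_000_000_000   # 1 billion ecu = 1 btc, per the rate above
SATOSHI_PER_BTC = 100_000_000

def ecu_to_btc(ecu):
    return ecu / ECU_PER_BTC

def ecu_to_satoshi(ecu):
    return ecu * SATOSHI_PER_BTC / ECU_PER_BTC

print(ecu_to_btc(3_500_000))  # 0.0035 btc, i.e. the "3.5mn" from the examples above
print(ecu_to_satoshi(20))     # 2.0 satoshi
```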

How to read the output of !Xlist

Take this output, for example:

auctionbot B#1009 O=500mn LB=499.99mn E=2018-12-07 06:34:14.465343 (16h47) >>> 2k wFF
auctionbot S#1010 O=1.08mn LB=1.13353mn E=2018-12-08 08:30:05.823023 (42h43) >>> 18096 NT q60
auctionbot S#1011 O=1.05mn LB=None E=2018-12-08 09:25:26.658053 (43h39) >>> 29873 BN q43
auctionbot --- end of auction list, 501.124mn total bids ---

'B#' or 'S#' denotes if it is a buy or sell order, followed by the order number

'O' is the opening bid, 'LB' denotes the current lead bid, and 'E' is the end date with the time remaining in hours/minutes

Everything after the ">>>" is what is being sold.
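Putting that field breakdown together, a list line can be picked apart mechanically; an illustrative sketch in Python (the regex and names here are mine, not the bot's):

```python
import re

# One line of !Xlist output, taken from the sample above.
LINE = "S#1010 O=1.08mn LB=1.13353mn E=2018-12-08 08:30:05.823023 (42h43) >>> 18096 NT q60"

PATTERN = re.compile(
    r"(?P<side>[BS])#(?P<number>\d+) "      # 'B#' buy or 'S#' sell, then order number
    r"O=(?P<opening>\S+) "                  # opening bid
    r"LB=(?P<lead>\S+) "                    # current lead bid ('None' if no bids)
    r"E=(?P<end>.+?) \((?P<left>[^)]+)\) "  # end date, then time remaining
    r">>> (?P<lot>.+)"                      # everything after '>>>' is the lot
)

m = PATTERN.match(LINE)
print(m.group("side"), m.group("number"), m.group("lead"), m.group("lot"))
```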

If you are unfamiliar with Euloran auctions, you may be thrown by cryptic combinations such as "BN q43" or "wFF". Note that while the bot will accept whatever string is entered for the item/lot when creating an order, conventions in usage emerge over time. In the former case above, the "BN" denotes an item in the game, and the "q43" denotes the quality. In the latter case, the "wFF" stands for Wired Filthy Fiats, and has recently seen usage in Pizarro auctions.4

  1. case-sensitive []
  2. the irc nick issuing the !Xcancel []
  3. again []
  4. The ideal situation would be, of course, if I can eventually harness these conventions in order to automate price-charting. []

Seeking Forum Input: Auctionbot's Currency - btc vs ecu

September 9th, 2018

UPDATE: See thread (http://btcbase.org/log/2018-09-09#1848940). Question answered. Input closed. Bot will deal 100% in ecu.

=============================

Now that I've released the command router for logbot, I've turned my attention to really trying to think through the (re)design of auctionbot. I'm currently at the point where I need to make a couple of design decisions and would like to field some input from the forum.

Question 1: Should the auctionbot deal in btc or ecu? (e.g. "!Xbid [ecu amount]" vs "!Xbid [btc amount]")

Question 2: Should the auctionbot accept bid/order amounts in either currency?1

My current thinking is:

Answer to 1) I will have the thing deal in btc as opposed to ecu

Reasoning being that I need to pick one currency, and trb is the currency of the republic

Answer to 2) I will not offer any automatic btc-to-ecu conversion features.

Reasoning being I don't want to be in the business of curating conversion rates for this. I'm thinking of cases like: I'm vacationing on the moon for a month. During that time the ecu-to-btc rate changes, but the bot isn't updated to reflect the change. People don't notice and chaos ensues2

Does this sound like solid reasoning? And would eulora players hate to enter their auctions in btc vs ecu?

  1. i.e. the bot converts input/output where needed to/from the currency chosen in  question 1 []
  2. e.g. someone auctions off something for their intended amount, but bot incorrectly converts it into some other amount. I'm then stuck having to unwind data after-the-fact []

logbot_command_router_python: Genesis

September 2nd, 2018

Special thanks to ben_vulpes for helping me understand the fundamental bits of the design that this specific implementation was based on.

What is this?

This is a package of .py scripts designed to 1) hook into a PostgreSQL db that is being used by a running instance of logbot or logbot-multiple-channels-corrected in order to 2) act as a configurable and extendable 'bot command router'
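Setting aside the PostgreSQL plumbing, the 'command router' idea boils down to: strip the configured prefix from an incoming log line, look the command up in a table, dispatch, and reply. A minimal illustrative sketch in Python (the names are mine, not the package's actual code):

```python
# Illustrative only: the real package wires this to logbot's postgres db.
PREFIX = "!X"   # configurable bot command prefix, as in a knobs/config.py

def cmd_ping(args):
    return "pong"

def cmd_echo(args):
    return args

COMMANDS = {"ping": cmd_ping, "echo": cmd_echo}

def route(message):
    """Return the bot's reply for a message, or None if it isn't a command."""
    if not message.startswith(PREFIX):
        return None
    name, _, args = message[len(PREFIX):].partition(" ")
    handler = COMMANDS.get(name)
    return handler(args) if handler else None

print(route("!Xping"))        # pong
print(route("just chatter"))  # None (not a command, so the bot stays quiet)
```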

Where is this?

logbot_command_router_python_genesis.vpatch

logbot_command_router_python_genesis.vpatch.lobbes.sig

This:

From INSTALL:

Install the following via your preferred method: 

logbot-genesis or logbot-multiple-channels-corrected (http://btcbase.org/patches?patchset=bot&search=)
psycopg2 (a PostgreSQL adapter for Python: http://initd.org/psycopg/)

Press logbot_command_router_python via your preferred V:

mkdir -p ~/src/logbot_command_router_python
cd ~/src/logbot_command_router_python

mkdir .wot
cd .wot && wget http://www.lobbesblog.com/lobbes.asc && cd ..

v.pl init http://www.lobbesblog.com/src/logbot_command_router_python
v.pl press logbot_command_router_python logbot_command_router_python_genesis.vpatch

From README:

HOW TO USE:

Bits designed to be customized to your needs:
knobs/config.py << Edit your postgres db connection info, bot command prefix, etc.
knobs/router.py << The central command router. Write or pull in your custom commands here.
commands/* << A directory to house command scripts that you can import into router.py

To run:
Configure your bits mentioned above
Start your local postgres server and your instance of logbot or logbot-multiple-channels-corrected
./main.py

CAVEATS:

Designed to communicate with a LOCAL Postgres server only, not remote. (Feel free to extend this to add that capability, however)

Auctionbot ETA and Status Report

July 28th, 2018

This post will serve to provide communication on the overall status of Auctionbot. I will update periodically as I see fit.

Current ETA1 : October 31st, 2018

Steps to Fruition:

  1. [Complete] Spin up instance of logbot-multiple-channels-corrected and confirm functionality
  2. [Complete] Successful test run of an external .py script communicating on IRC via logbot and the listen/notify mechanisms of postgres. Baked the polished tests into a logbot-command-router-genesis
  3. [Complete] Re-design and re-implement auctionbot as an extension of logbot-command-router-genesis, leveraging the legacy2 auctionbot coad where possible. It will include the reverse auction functionality specified in the logs.
  4. [Complete] Test.
  5. [Complete] Release into production

-

  1. I will announce in #trilema if this date is pushed out []
  2. i.e. the auction 'plugin' I wrote for lobbesbot []

logbot-multiple-channels-corrected on Gentoo: Tips n' Tricks for the Uninitiated

July 20th, 2018

Recently I stood up an instance of logbot-multiple-channels-corrected on my Pizarro Rockchip.1 In the process, I accumulated a large pile of notes that I cobbled into this list of "tips n' tricks". While I'm no talking paperclip, I hope this will be of use to folks in the audience who may be new to any or all of: Gentoo, v, postgresql, lisp, whathaveyou.

We will be covering the following:

  • sbcl & Quicklisp
  • PostgreSQL (with uuid USE flag)
  • ircbot-genesis & ircbot-multiple-channels-corrected
  • logbot-genesis & logbot-multiple-channels-corrected

sbcl & Quicklisp

Both are needed and both are pretty straightforward installation-wise.2 As such, I'm just going to provide a pointer to the Quicklisp installation page and leave it at that.

PostgreSQL (with uuid USE flag)

> As the parenthetical in the above heading suggests, you need to set your USE flag for "uuid" before you emerge Postgres as the logbot.sql you will run later requires the uuid-ossp module.

> Also be wary of your PYTHON_SINGLE_TARGET flag when emerging3 as mine defaulted to 3_5. You can set your PYTHON_SINGLE_TARGET via your package.use:

dev-db/postgresql PYTHON_SINGLE_TARGET: -* python2_7

> While I think this depends on -how- you install, my installation4 created a database cluster with three databases: template0, template1, and postgres. From what I've learned, the 'postgres' database (owned by the 'postgres' superuser created by the installation) is meant to be used as a general purpose database. The 'template1'  database is used as a template when creating new databases, and 'template0' is primarily used to restore template1 in the event you take a dump on it.

> Once installed, your logs and config files will be located somewhere like here:

configs:

/etc/postgresql-9.5/postgresql.conf

logs:

/var/lib/postgresql/9.5/data/postmaster.log

> For postgres-9.5 at least, this is how to start/stop the server:

As root:

/etc/init.d/postgresql-9.5 start

/etc/init.d/postgresql-9.5 stop

> Connecting to the default 'postgres' database (as 'postgres' user):

pizrk003 portage # su postgres -c 'psql'

psql (9.5.12)

Type "help" for help.

postgres=#

> Link to the docs: https://www.postgresql.org/docs/

ircbot-genesis & ircbot-multiple-channels-corrected

> For ircbot-genesis, follow the INSTALL steps on btcbase.org/patches as it will take you most of the way. I also needed to load Quicklisp before 'cl-irc' in the SBCL REPL:

(load "~/quicklisp/setup.lisp")

(ql:quickload :cl-irc)

> Also be on the lookout for a rogue "robots.txt" file that ends up in the .seals directory on the 'init' step of the INSTALL. If you get the following barf on press5 then just remove it from .seals:

lobbes@pizrk003 ~/src/ircbot $ ~/v/v.pl press ircbot-genesis ircbot-genesis.vpatch

----------------------------------------------------------------------------------

WARNING: robots.txt is an INVALID seal for ircbot-genesis.vpatch!

Check that this user is in your WoT.

Otherwise remove the invalid seal from your SEALS directory.

----------------------------------------------------------------------------------

Died at /home/lobbes/v/v.pl line 594.

lobbes@pizrk003 ~/src/ircbot $ ls /home/lobbes/src/ircbot/.seals

ircbot-genesis.vpatch.trinque.sig  robots.txt

lobbes@pizrk003 ~/src/ircbot $ rm /home/lobbes/src/ircbot/.seals/robots.txt

> For ircbot-multiple-channels-corrected, you can do something like:

cd ~/src/ircbot/.wot &&

wget -O ben_vulpes.asc http://wot.deedbot.org/4F7907942CA8B89B01E25A762AFA1A9FD2D031DA.asc

cd ../patches &&

wget -O ircbot-multiple-channels-corrected.vpatch http://btcbase.org/patches/ircbot-multiple-channels-corrected/file

cd ../.seals &&

wget -O ircbot-multiple-channels-corrected.vpatch.ben_vulpes.sig http://btcbase.org/patches/ircbot-multiple-channels-corrected/seal/ben_vulpes

cd ../ &&

v.pl press ircbot-multiple-channels-corrected ircbot-multiple-channels-corrected.vpatch

> Remember to update your symlink for quicklisp:

ln -sfn ~/src/ircbot/ircbot-multiple-channels-corrected ~/quicklisp/local-projects/ircbot

logbot-genesis & logbot-multiple-channels-corrected

The steps for logbot-genesis and logbot-multiple-channels-corrected are similar to ircbot, with the obvious differences relating to interfacing with postgresql.

> You'll want to edit your USAGE to look something like this6. (Note: If you are connecting to the postgres database, you do not specify a pw. In which case, leave that blank in your USAGE file):

(asdf:load-system :logbot)

(defvar *bot*)

(setf *bot*

(logbot:make-logbot

"chat.freenode.net" 6667 "nick" "password"

'("#channel1" "#channel2")

'("db-name" "db-user" "db-password" "db-host")))

; connect in separate thread, returning thread

(logbot:ircbot-connect-thread *bot*)

; or connect using the current thread

; (logbot:ircbot-connect *bot*)

> Similarly to ircbot, don't forget to update your quicklisp symlinks to point to logbot-multiple-channels-corrected:

ln -sfn ~/src/logbot/logbot-multiple-channels-corrected ~/quicklisp/local-projects/logbot

> To start the bot, you'll want to load the following from the SBCL REPL:

(load "/[path]/[to]/quicklisp/setup.lisp")

(ql:quickload :cl-irc)

(ql:quickload :cl-postgres)

(ql:quickload :postmodern)

(load "[Your configured USAGE file referenced above]")

> Checking your LOG table to see logged IRC lines:

pizrk003 lisp # su postgres -c 'psql'

psql (9.5.12)

Type "help" for help.

postgres=# SELECT * FROM LOG;

id                  |    target    |  message   |      host       | source |  user   |        received_at

--------------------------------------+--------------+------------+-----------------+--------+---------+----------------------------

8061cf9a-1e0b-4a3c-a0d9-837afa8cae55 | #lobbestest  | test       | 192.121.170.137 | lobbes | ~lobbes | 2018-07-16 03:05:06.698238

ab8fc726-8c46-4495-b571-64789de02826 | #lobbestest  | test five  | 192.121.170.137 | lobbes | ~lobbes | 2018-07-16 03:05:11.140429

21f03cb6-3322-4332-af85-133d742113a0 | #lobbestest  | test 1234  | 192.121.170.137 | lobbes | ~lobbes | 2018-07-16 03:30:56.743491

b6d7fa3f-1e84-4fde-9e4d-b3dd7e2b8d32 | #lobbestest2 | test'      | 192.121.170.137 | lobbes | ~lobbes | 2018-07-16 05:49:53.669309

cd48a6c0-dc8e-449e-bc68-5a3aabc16c48 | #lobbestest  | test       | 192.121.170.137 | lobbes | ~lobbes | 2018-07-16 05:49:57.800388

2e7c2bb6-e9c4-41fb-a629-1a0f5e1682ba | #lobbestest2 | test again | 192.121.170.137 | lobbes | ~lobbes | 2018-07-16 05:56:07.579301

96b6d216-44e5-4b4d-99b8-c136fb538b27 | #lobbestest  | test niaga | 192.121.170.137 | lobbes | ~lobbes | 2018-07-16 05:56:30.749328

  1.  "Auctionbot 2.0" will sit on top of this. Look soon for a guide on feeding commands to/from logbot-multiple-channels-corrected via postgres triggers []
  2. actually, I think my P-rockchip came with sbcl already emerged []
  3. I've quickly learned that the "-a" option is your friend when it comes to "emerge" []
  4. which was via portage using: a package mask on ">dev-db/postgresql-9.5.12"; "postgresql" and "uuid" USE flags in "make.conf"; then "emerge -av dev-db/postgresql" []
  5. this barf, of course, may look different depending on your V []
  6. pardon the lack of indents; blog formatting woes []

Sales Report: June 2018

July 7th, 2018

Since being reminded that proper reporting is a primary element of the action/report/discuss/adjust cycle, I figure I will start releasing monthly reports of my actions towards building sales for Pizarro ISP. And with that short intro, here is the June report, which documents the actions I have taken since being given the green light to begin sales on June 3rd:

Sales brought in this month1

0

Summary of Activity

Since I had to start somewhere, I figured I'd survey the layout of two corners of the vast refuse pile: the "webhosting forums" and the "bitcoin forums".

As it turns out, a good chunk of the webhosting forums out there have certain thresholds2 that need to be met in order to post "advertisements" on their designated advertisement boards. I targeted what usgoogle claims is the most relevant result for the query of "web hosting forums"3 and set off to reach that forum's threshold of "10 non-fluff" posts. While seemingly simple, this proved to be an expensive4 exercise, eating through minutes and brain cells as I stepped on "gotcha" landmines such as "too fluffy a reply, post deleted" and "posts in X board don't count toward post count". Even after hitting the threshold and posting my ad, I stepped on another landmine: "post deleted due to incomplete public whois information".

As you might expect, I was met with no such "threshold to post ads" on the various "bitcoin forums" and was able to post ads immediately upon account creation5. I also got two kinda-leads from the post on bitcointalk, which I pursued to no results6.

If any are interested, I am keeping a running list of the ads I have here: http://blog.lobbesblog.com/list-of-pizarro-ads/

Takeaway

In my eyes, I have utterly failed and flailed thus far. Not much to show here, and it shows. Still, I got some good direction from the forum, both directly and indirectly7. My plan of attack for this next month:

  • Utilize tracking on my ads so as to get proper measurements. I have already retrofitted my existing ads with links pointing to variations of the url of my custom landing page
  • Look for additional avenues outside of the "web hosting forums".
  • Automation: Explore ways to construct a mass-mailer of sorts; figure out the proper design of a trawler8

Onward.


  1. and year-to-date []
  2. mostly "N post-count required" []
  3. at the time of this writing this is "www.webhostingtalk.com" []
  4. considering this is effort expended to post a single thread that will be seen by maybe.. ~20 people before sinking to the 2nd page of roughly 100 entries in a digital phonebook that I suspect nobody consults []
  5. granted, for tardstalk I didn't have to start 'fresh' as I had an old account that I simply dusted off []
  6. perhaps opening with a sales pitch on that second lead was a bad idea. []
  7. as oft the case []
  8. and where to trawl []

Arming your Arm64 RockChip Gentoo against the hordes of Mindless Bots

July 1st, 2018

About a week ago I was notified in #trilema that my blog was unreachable. After being convinced to resist the urge to engage in the usual shamanism, I set off to first define the problem properly.

So, in an attempt to diagnose, I cut the patient back open and stuck my hands in the chest cavity to feel around. Let's look closely at the apache error logs1:

[Tue Jun 26 05:42:31.703938 2018] [mpm_prefork:error] [pid 12260] AH00161: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting

Looks like apache is hitting the maximum number of workers. Let's see what netstat is saying:

pizrk003 htdocs # netstat -an | egrep ':80|:443' | grep ESTABLISHED | awk '{print $5}' | grep -o -E "([0-9]{1,3}[\.]){3}[0-9]{1,3}" | sort -n | uniq -c | sort -nr

187 173.254.216.66

Here we see one IP address slamming my blog with 187 unique requests (all in the ESTABLISHED state). A quick look at the apache config file confirms that this many requests indeed exceeds the max limit:

pizrk003 htdocs # cat /etc/apache2/modules.d/00_mpm.conf | grep 'MaxRequestWorkers'

# MaxRequestWorkers: Maximum number of child processes to serve requests

MaxRequestWorkers       150

MaxRequestWorkers       150

MaxRequestWorkers       150

MaxRequestWorkers       150

MaxRequestWorkers       150

Okay, so now we have the problem formally defined as "Apache cannot serve requests due to MaxRequestWorkers limit being exceeded".

Now to fix. Let's first try the obvious treatment of simply increasing the MaxRequestWorkers in the apache config to something like 256. Restart apache, and then netstat again:

pizrk003 htdocs # netstat -an | egrep ':80|:443' | grep ESTABLISHED | awk '{print $5}' | grep -o -E "([0-9]{1,3}[\.]){3}[0-9]{1,3}" | sort -n | uniq -c | sort -nr

256 94.230.208.147

Sadly, no dice. Now the spamola has simply increased to hit the new max (and from a new ip, no less). Looks like we will need to do something to limit spam requests from getting through. Unfortunately for us, a firewall solution is not readily available as iptables does not function "out-of-the-box" on this thing. Nevertheless, there might be a few things we can try at the apache layer for the time-being.

Mod_evasive

The module "mod_evasive" works by giving apache the functionality to automatically deny any ip address that requests X resource more than N times per I interval. Here's how I got this up and running:

First, add "www-apache/mod_evasive-1.10.1 ~arm64" to your package.accept_keywords file (or directory, if you prefer it that way)

echo '=www-apache/mod_evasive-1.10.1 ~arm64' >> /etc/portage/package.accept_keywords

For my fellow Gentoo (and arm architecture) n00bs out there, let me take a short interlude to explain what I've come to understand about how keywords work in Gentoo, and why we need to do the above in our case2.

From https://wiki.gentoo.org/wiki/KEYWORDS:

In an ebuild the KEYWORDS variable informs in which architectures the ebuild is stable or still in testing phase.

And from https://wiki.gentoo.org/wiki/ACCEPT_KEYWORDS:

The ACCEPT_KEYWORDS variable informs the package manager which ebuilds' KEYWORDS values it is allowed to accept.

...

Stable and unstable keywords

The default value of most profiles' ACCEPT_KEYWORDS variable is the architecture itself, like amd64 or arm.

In these cases, the package manager will only accept ebuilds whose KEYWORDS variable contains this architecture.

If the user wants to be able to install and work with ebuilds that are not considered production-ready yet, they can add the same architecture but with the ~ prefix to it, like so:

FILE /etc/portage/make.conf

ACCEPT_KEYWORDS="~amd64"

This, I think, is important for us to understand because mod_evasive is definitely not listed as stable for the arm64 architecture. That being said, this Gentoo's make.conf had the value of "**" for ACCEPT_KEYWORDS, which, I think, may already tell Portage to pull in untested/unstable packages and may also override whatever you set in package.accept_keywords. In other words, this may be an exercise in redundancy, but at least we all learned something!3.

Interlude over; let's keep moving. After getting your keywords situated, let's slap the "mod_evasive" USE flag in make.conf and emerge:

pizrk003 lobbes # emerge -av www-apache/mod_evasive

[...SNIP...]

These are the packages that would be merged, in order:

Calculating dependencies... done!

[ebuild  N     ] www-apache/mod_evasive-1.10.1-r1::gentoo  20 KiB

Total: 1 package (1 new), Size of downloads: 20 KiB

Would you like to merge these packages? [Yes/No] Yes

>>> Verifying ebuild manifests

>>> Emerging (1 of 1) www-apache/mod_evasive-1.10.1-r1::gentoo

>>> Installing (1 of 1) www-apache/mod_evasive-1.10.1-r1::gentoo

>>> Recording www-apache/mod_evasive in "world" favorites file...

>>> Jobs: 1 of 1 complete                           Load avg: 0.41, 0.12, 0.04

>>> Auto-cleaning packages...

>>> No outdated packages were found on your system.

* GNU info directory index is up-to-date.

[...SNIP...]

Seems good. And looky here, we now have a mod_evasive config file to edit:

pizrk003 lobbes # more /etc/apache2/modules.d/10_mod_evasive.conf

<IfDefine EVASIVE>

LoadModule evasive_module modules/mod_evasive.so

DOSHashTableSize 3097

DOSPageCount 5

DOSSiteCount 100

DOSPageInterval 2

DOSSiteInterval 2

DOSBlockingPeriod 10

# Set here an email to notify the DoS to someone

# (here is better to set the server administrator email)

DOSEmailNotify root

# Uncomment this line if you want to execute a specific command

# after the DoS detection

#DOSSystemCommand    "su - someuser -c '/sbin/... %s ...'"

# Specify the desired mod_evasive log location

DOSLogDir /var/log/apache2/evasive

# WHITELISTING IP ADDRESSES

# IP addresses of trusted clients can be whitelisted to insure they are never

# denied.  The purpose of whitelisting is to protect software, scripts, local

# searchbots, or other automated tools from being denied for requesting large

# amounts of data from the server.

#DOSWhitelist    127.0.0.*

#DOSWhitelist    172.16.1.*

</IfDefine>

# vim: ts=4 filetype=apache

Tweak this how you like. Here's a quick rundown on the interesting knobs:

DOSPageCount and DOSSiteCount: Amount of requests allowed per interval to unique pages and the site as a whole, respectively.

DOSPageInterval and DOSSiteInterval: Sets the interval (in seconds) for unique pages and the site as a whole, respectively.

DOSBlockingPeriod: Amount of time (in seconds) for the ip to be denied.

DOSWhitelist: Pretty self-explanatory; exempts specific IP addresses from the above restrictions.
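The mechanism behind these knobs can be modeled as a per-IP sliding-window counter; here is an illustrative toy in Python (mod_evasive itself is C, and this is only a sketch of the page-count logic, not its actual implementation):

```python
import time

class Evasive:
    """Toy model of the mod_evasive page check: deny an IP that requests
    the same page more than `count` times within `interval` seconds."""
    def __init__(self, count=5, interval=2, blocking_period=10):
        self.count = count                  # cf. DOSPageCount
        self.interval = interval            # cf. DOSPageInterval
        self.blocking_period = blocking_period  # cf. DOSBlockingPeriod
        self.hits = {}      # (ip, page) -> list of recent request timestamps
        self.blocked = {}   # ip -> time at which the block expires

    def allow(self, ip, page, now=None):
        now = time.time() if now is None else now
        if self.blocked.get(ip, 0) > now:
            return False    # still inside the blocking period
        window = [t for t in self.hits.get((ip, page), []) if now - t < self.interval]
        window.append(now)
        self.hits[(ip, page)] = window
        if len(window) > self.count:        # too many hits inside the interval
            self.blocked[ip] = now + self.blocking_period
            return False
        return True
```

With the defaults above, a sixth request for the same page inside two seconds gets the IP denied for ten seconds.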

Let's also edit the apache config4 to load the mod_evasive module. Add  "-D EVASIVE" to the "APACHE2_OPTS" line:

APACHE2_OPTS="-D DEFAULT_VHOST -D INFO -D SSL -D SSL_DEFAULT_VHOST -D LANGUAGE -D PHP -D EVASIVE"

Now restart apache and do some stress-testing to confirm everything's working. Tweak your configuration as necessary.

Mod_limitipconn

While mod_evasive has utility, it didn't really address my particular problem, which was a single ip address exceeding the max number of apache workers. Since the ips spamming my box were sending their requests at reasonable intervals5, mod_evasive wasn't much use unless I wanted to also block legit traffic. Enter mod_limitipconn. As the name suggests, this module gives apache the ability to limit unique ip addresses to a set number of max requests. Let's get it emerged and configured.

Like with mod_evasive, add the required bits to your package.accept_keywords, make.conf, and apache config for mod_limitipconn:

echo '=www-apache/mod_limitipconn-0.24 ~arm64' >> /etc/portage/package.accept_keywords

=======================

cat /etc/portage/make.conf

[...SNIP...]

USE="mysql cgi php apache2 fpm apachetop mod_evasive mod_limitipconn -gtk3 -avahi -gnome -tls-heartbeat -gpm -X -libnotify -consolekit offensive ufw -dbus -bluetooth -systemd -wayland$

[...SNIP...]

=======================

cat /etc/conf.d/apache2

[...SNIP...]

APACHE2_OPTS="-D DEFAULT_VHOST -D INFO -D SSL -D SSL_DEFAULT_VHOST -D LANGUAGE -D PHP -D EVASIVE -D LIMITIPCONN"

[...SNIP...]

Now, I also had to add the following package mask to force Portage to pull in the earlier version of this thing6:

pizrk003 lobbes # more /etc/portage/package.mask/mod_limitipconn

>www-apache/mod_limitipconn-0.24

Now, emerge -av www-apache/mod_limitipconn-0.24. It should go smoothly.

Important: for this module to function, you will also need mod_status enabled with the "ExtendedStatus On". On this box, mod_status was already installed7, so we just need to define it by adding "-D STATUS" to the "APACHE2_OPTS=" line in the apache config:

APACHE2_OPTS="-D DEFAULT_VHOST -D INFO -D SSL -D SSL_DEFAULT_VHOST -D LANGUAGE -D PHP -D EVASIVE -D LIMITIPCONN -D STATUS"

Now, time to configure mod_limitipconn by editing /etc/apache2/modules.d/27_mod_limitipconn.conf. I found this to be slightly tricky. The following worked for me, so feel free to use it as a general guide8:

<IfDefine LIMITIPCONN>

LoadModule limitipconn_module modules/mod_limitipconn.so

<Location /var/www/localhost/htdocs/>

MaxConnPerIP 10

# exempting images from the connection limit is often a good

# idea if your web page has lots of inline images, since these

# pages often generate a flurry of concurrent image requests

NoIPLimit blog/images/*

</Location>

<Location /mp3>

MaxConnPerIP 1

# In this case, all MIME types other than audio/mpeg and video*

# are exempt from the limit check

OnlyIPLimit audio/mpeg video

</Location>

<IfModule mod_limitipconn.c>

# Set a server-wide limit of 10 simultaneous downloads per IP,

# no matter what.

MaxConnPerIP 10

</IfModule>

</IfDefine>

Let's see if it works. One quick restart of apache and then netstat:

pizrk003 htdocs # netstat -an | egrep ':80|:443' | grep ESTABLISHED | awk '{print $5}' | grep -o -E "([0-9]{1,3}[\.]){3}[0-9]{1,3}" | sort -n | uniq -c | sort -nr

25 197.231.221.211

Mwahah, 25 is much better than 256, huh? The 'raw' netstat I issued right before this was showing some 300 connection attempts from this same ip, but with statuses of "TIME_WAIT" and "CLOSE_WAIT" instead of "ESTABLISHED". Sure enough, a few seconds later another netstat reveals:

pizrk003 htdocs # netstat -an | egrep ':80|:443' | grep ESTABLISHED | awk '{print $5}' | grep -o -E "([0-9]{1,3}[\.]){3}[0-9]{1,3}" | sort -n | uniq -c | sort -nr

Nothing! The robots have stopped knocking.

Hopefully this will be of use to you if you are trying to carve a webserver out of the Arm64 RockChip Gentoo. Now, if I can just get iptables working...

  1. located in /var/log/apache2/error_log []
  2. Gentoo experts out there, plox to correct me here []
  3. maybe? []
  4. located at /etc/conf.d/apache2 []
  5. one per second, roughly []
  6. as the "0.24-r2" version dun seem to work []
  7. see the config in  /etc/apache2/modules.d/00_mod_status.conf []
  8. if you want more info, please see this handy README I found: http://dominia.org/djao/limitipconn2-README  []