2023-10-27
If you use the ipv4-num database, it gives the IP address ranges in decimal format.
You can then take an incoming IP address and convert it to decimal:
```ruby
require 'ipaddr'

ip = IPAddr.new "10.0.2.15"
ip_int = ip.to_i
```
Then you can query the ip-location-db using the decimal value (i.e. `where ip_int >= ip_range_start and ip_int <= ip_range_end`).
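As a sketch of the whole lookup in plain Ruby (the ranges and the country labels below are made-up sample data, not the real database contents):

```ruby
require 'ipaddr'

# Hypothetical extract of an ipv4-num table: [range_start, range_end, label]
RANGES = [
  [IPAddr.new('10.0.0.0').to_i, IPAddr.new('10.255.255.255').to_i, 'PRIVATE'],
  [IPAddr.new('52.0.0.0').to_i, IPAddr.new('52.95.255.255').to_i,  'US'],
]

# Find the row whose [start, end] interval contains the address,
# exactly like the SQL condition above
def lookup(address)
  ip_int = IPAddr.new(address).to_i
  row = RANGES.find { |s, e, _| ip_int >= s && ip_int <= e }
  row&.last
end

puts lookup('10.0.2.15') # => PRIVATE
```

In a real database you would index `ip_range_start` and let SQL do the range scan; the logic is the same.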
Source reddit
~~~ * ~~~
2023-09-21
TODO
- https://www.toptal.com/ruby-on-rails/rails-6-features
- https://bogdanvlviv.com/posts/ruby/rails/what-is-new-in-rails-6_0.html
- https://medium.com/rubyinside/whats-coming-to-rails-6-0-8ec79eea66da
- https://prograils.com/posts/new-features-rails-6-multiple-databases-parallel-tests-action-mailbox-etc
- https://matthewhoelter.com/2018/09/18/deploying-ruby-on-rails-for-ubuntu-1804.html
- https://github.com/wg/wrk
- learn WebSocket using ActionCable
- -----
- cleanup fish/fisher
- install: rvm + function, nvm + function, node, yarn
- rails new rails6test
- -----
- testing howto
- news in JS?
- multi threading notes
News { ActiveRecord | ActionView | ActiveJob | ActionCable | Frontend | Email | ActiveStorage | Security | Core | Devel | Tools&Gems | Testing | Info }
Other { Upgrade Exp. | References | Tools | Bench. | Books }
New in rails (till 2019-07-06)
⇑ ActiveRecord
- `ActiveRecord::Relation#pick` => `.limit(1).pluck(*column_names).first`
- Model errors as objects @ Rails >= 6.1
- ActiveRecord `store`: wrapper around `serialize` to easily manage a hash value with attribute/key accessor methods -- api, PR for dirty methods
- `Model.optimizer_hints` PR and `Model.annotate` PR
- leverage DB unique constraints:
- ActiveRecord's `reselect`, `rewhere`, `reorder`
- ActiveRecord enum attribute (int value by name) -- api
- `Item.where(price: 10..)` -- endless ranges in where conditions
- `ActiveRecord::Base.verbose_query_logs = true`: show the query source code line number
- multi-db support: suggestions, rake tasks per db, replica option/read-only db
  - migrations path: PR1, PR2 -- define `migrations_paths` in `database.yml`
  - eileen slides
  - `connects_to`, `connected_to` switch DB connection PR; `connected_to?` checks role and connection
  - `rails db:schema:cache:dump`, `rails db:schema:cache:clear` -- PR
  - fix query cache for multiple connections PR
  - `Model.while_blocking_writes{}` blocks/denies DB writes
- Change SQLite 3 boolean serialization from t/f to use 1/0 => migrate old data in DB
- `Model#delegate_missing_to` -- PR
- `belongs_to` new `:default` option -- e.g.: `belongs_to :person, default: -> { Person.current }`
- ActiveRecord `changes` in callbacks: PR and reference table
- `find_each{|r| ... }` shortcut for `find_in_batches{|b| b.each{|r| ... } }`
  - also supports `limit`: `Post.limit(10_000).find_each{|p| ... }`
- DB comments
- `db:migrate` creates development and test databases
- migrations: SQL expression as default value, e.g.: `t.datetime :published_at, default: -> { 'NOW()' }`
- models derive from `ApplicationRecord` instead of `ActiveRecord::Base`
- ActiveRecord `OR` support, e.g.: `Post.where('id = 1').or(Post.where('id = 2'))`
- after commit shortcuts:
- ActiveRecord `Model.left_outer_joins` support
- foreign keys now supported in the create_table DSL
- Active Record ignored_columns -- no accessor methods and no show in queries
- multi context validations -- e.g.: `record.valid?(:ctx_name)`
- `People.in_batches(of: 100){|rel| rel.where... }` -- see ActiveRecord::Relation#in_batches
- ActiveRecord.suppress -- silently disable record save in a block (e.g.: `Model.suppress{...}`)
- Attributes API: define an attribute with a `Proc`, see commit (no more `serialize` misuse)
- DB connection pool explained here -- sqlite has no pool
- Set database pool size via the `RAILS_MAX_THREADS` env variable
- `ActiveRecord::Base.connection_pool.stat` -- status info hash
- `find_in_batches` got an `end_at` option
- `active_record.warn_on_records_fetched_greater_than` -- info
⇑ ActionView
- ActionText: rich text editor integrated with ActiveRecord and ActiveStorage (images/attachments)!! -- guide, intro
- ActionView helper `current_page?`
- `form_with` = `form_for` + `form_tag`: article, PR1, PR2
- new tag helpers -- e.g.: `tag.div(...)`
- better controller's `helpers` proxy to use user defined helpers
- `ActionController::Parameters#dig`, as for Hash#dig
- ActionDispatch `Rails.application.reloader.wrap{}` callback
- Per-form CSRF tokens -- `config.per_form_csrf_tokens_enabled = false/true`
- `protect_from_forgery` doesn't run first anymore, it is simply queued like the other callbacks; use the option `prepend: true` to set it as the first callback
  - NB: `request_forgery_protection` initializer removed from Rails -- api, because usually not needed
- controller/model's strong parameters
- controller actions default to head :ok if no template exists
- helpers `div_for` and `content_tag_for` will be gone in Rails 5 => recordtaghelper gem
- introduced the `#{partial_name}_iteration` local variable in partials rendered with a collection
- live streaming for persistent connections
- caching/faster rendering:
- template dependencies with wildcard support
- strong ETag -- `Response#strong_etag=, weak_etag=, fresh_when, stale?`
- views cache control -- e.g.: `fresh_when`
- declarative etags
⇑ ActiveJob / background jobs
- ActiveJob -- interface for existing queuing systems
- see also: ActionMailer#deliver_later, GlobalID
- `retry_job`, `retry_on` and `discard_on` catch multiple exceptions
- ActiveJob priority support
- `config.active_job.queue_adapter = :async` -- run jobs in threads à la sucker_punch
- `queue_name_prefix` -- PR
⇑ ActionCable / websockets
- ActionCable channel_prefix in your cable.yml
- ActionCable -- websockets: live features, chat, notifications | via redis or postgres
- see `ActionController::Renderer` to render views, and use `puma` as a separate process in production
⇑ Frontend
- webpack integration: PR1, PR2 (make default), hp
- yarn javascript package manager used by default -- PR1, PR2, folders
- removed jQuery; vanilla UJS -- guide, jquery-rails, jquery-ujs
- turbolinks:
- turbolinks 5 and 📺 railsconf video
- moved to a separate repo
- `data-turbolinks-track="true"` moved to `data-turbolinks-track="reload"`
- Sprockets gzip files
- Sprockets 4 upgrade -- source maps, manifest.js, ES6 support
⇑ Email
- `ActionMailbox` to process incoming emails -- guide
- Action Mailer Preprocessing -- e.g.: `InvitationMailer.with(invitee: person).account_invitation.deliver_later`, see also
- email previews
- ActionMailer `rescue_from`, as in controllers
⇑ ActiveStorage / upload to cloud -- guide
- ActiveStorage/ActiveJob Mirror direct uploads
- include blob via `Model.with_attached_columnname` -- avoids N+1 query
- video/pdf preview
- custom previewers
- `config.active_storage.routes_prefix = '/files'` custom route prefix
⇑ Security & Paranoia
- MessageEncryptor -- enc/dec strings
- logger/inspect's per model attributes filter -- e.g.: `Model.filter_attributes = [:iban, :cf]`
- `config.action_dispatch.use_cookies_with_metadata = true` -- add purpose/name metadata to cookies to enhance security
- ActiveRecord `has_secure_password`: you can specify the attribute name
- app-wide `config.force_ssl` deprecates `ActionController#force_ssl`
- new default headers `X-Download-Options: noopen` and `X-Permitted-Cross-Domain-Policies: none`
- Content-Security-Policy header DSL, moz ref -- disabled by default in 5.2
- credentials storage PR, guide -- `rails credentials:edit`
  - multi env support: `config/credentials/production.yml.enc` takes precedence over `config/credentials.yml.enc`, use `rails credentials:edit --environment staging` to edit a specific file
- secrets PR, guide -- e.g.: `rails secrets:edit`, `rails secrets:show`
- secure cookies server side enforced expire time -- e.g.: `cookies.signed[:user_name] = { value: "bob", expires: 2.hours }`
- SSL exclude option: `config.ssl_options = { redirect: { exclude: -> request { request.path !~ /healthcheck/ } } }`
- Active Support's way to write to a file atomically (thread safe) with `File.atomic_write`
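`File.atomic_write` is essentially a tempfile-plus-rename; a minimal stdlib-only sketch of the same idea (simplified: the real ActiveSupport helper also preserves the target's ownership and permissions):

```ruby
# Write to a temp file in the same directory, then rename it over the
# target: rename is atomic on POSIX filesystems, so readers never see
# a half-written file.
def atomic_write_sketch(path, contents)
  dir = File.dirname(File.expand_path(path))
  tmp = File.join(dir, ".atomic-#{Process.pid}-#{rand(1_000_000)}")
  File.open(tmp, 'w') { |f| f.write(contents) }
  File.rename(tmp, path) # atomic replace
end

atomic_write_sketch('demo.txt', "hello\n")
puts File.read('demo.txt') # => hello
File.delete('demo.txt')    # cleanup for this demo
```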
⇑ Core/config changes
- new Zeitwerk library loader: blog post, HP
- HTTP early hints/preloading
  - `preload_link_tag` helper: preload + http2 early hints
  - HTTP2 early hints, PR
- bootsnap installed by default -- TODO: crontab for periodic clean of tmp/cache
- routes: Custom url helpers and polymorphic mapping
- monkey patching is the past:
- use ruby refinements instead of monkey patching
- Deprecate `alias_method_chain` in favor of `Module#prepend` -- howto
- rails API mode: hp
- `rake xxx:yyy` tasks proxied by `rails xxx:yyy`
- config `serve_static_files` moved to `public_file_server.enabled`
⇑ Development
- `rails console --sandbox` doesn't write changes to the DB -- PR+cfg
- byebug > 8.2.1 is faster
- faster dev reload: set in Gemfile `group(:development){ gem 'listen', '~> 3.0.4' }`
⇑ Tools / External gems
- Router Visualizer: PR, sample -- install the graphviz `dot` command, run `Rails.application.routes.router.visualizer` and save the string to an html file
- ActiveRecord XML serialization moved into a gem
- Rails-observers -- separates callbacks from models/controllers
- ActiveResource -- consume json via rest
⇑ Testing
- Capybara Integration with Rails (AKA System Tests) -- with selenium and chrome, guide, parallel tests
⇑ Info / Minor changes / Good to know
- `Array#extract!` -- removes and returns the elements for which the block returns a true value (like select but modifies the receiver)
- execjs: mini_racer replaces therubyracer
- `rails notes` custom tags -- PR
- `rails notes` task -- searches the code for FIXME/OPTIMIZE/TODO annotations
- 📺 Rails 5 video tour by DHH on youtube
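`Array#extract!` mentioned above comes from ActiveSupport; a rough plain-Ruby approximation of its behavior (the `extract_from!` helper is made up for illustration, it is not the real API):

```ruby
# Mimic ActiveSupport's Array#extract!: remove and return the elements
# matching the block, mutating the receiver in place.
def extract_from!(array, &block)
  extracted = array.select(&block)
  array.reject!(&block)
  extracted
end

numbers = [1, 2, 3, 4, 5]
odd = extract_from!(numbers) { |n| n.odd? }
puts odd.inspect     # => [1, 3, 5]
puts numbers.inspect # => [2, 4]
```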
- `rails new` adds a default `config/puma.rb`
- `String#parameterize` to create ids for blog post titles
- `Array#inquiry.any?(:xxx)` -- finds symbols and strings matching xxx
- locale yaml files auto-reloaded in development
- date and time:
  - `DateTime.now.prev_occurring(:monday)` -- PR
  - Date/Time `#on_weekend?` and `#on_weekday?` :D
  - Time vs DateTime -- commit
- `Enumerable#pluck`
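ActiveSupport's `Enumerable#pluck` mentioned above behaves like a keyed `map`; a plain-Ruby approximation for when ActiveSupport is not loaded:

```ruby
people = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Linus' },
]

# With ActiveSupport: people.pluck(:name)
# Plain-Ruby equivalent:
names = people.map { |h| h[:name] }
puts names.inspect # => ["Ada", "Linus"]
```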
⇑ Upgrade experiences -- google search
⇑ References
- what's new: https://medium.com/rubyinside/whats-coming-to-rails-6-0-8ec79eea66da
- https://bogdanvlviv.com/posts/ruby/rails/what-is-new-in-rails-6_0.html
- -------------
- https://weblog.rubyonrails.org
- https://guides.rubyonrails.org/upgrading_ruby_on_rails.html
- https://edgeguides.rubyonrails.org/6_0_release_notes.html
- https://guides.rubyonrails.org/5_2_release_notes.html
- https://guides.rubyonrails.org/5_1_release_notes.html
- https://guides.rubyonrails.org/5_0_release_notes.html
- https://guides.rubyonrails.org/4_2_release_notes.html
- https://guides.rubyonrails.org
⇑ Tools
- scaling ruby apps
- 📺 DHH youtube videos on writing sw well
- MiniMagick
- ruby's set: a collection of unordered values with no duplicates
- `obj.method(:method_name).source_location` => shows where the method is defined
- `<<~TAG ... TAG` instead of `<<-TAG ... TAG` removes the indentation spaces!
- thor -- build powerful command-line interfaces
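A quick demonstration of the squiggly heredoc stripping indentation (plain Ruby >= 2.3):

```ruby
# <<~ removes the leading indentation common to all lines;
# <<- only allows the closing delimiter to be indented.
squiggly = <<~TXT
  hello
  world
TXT

dashed = <<-TXT
  hello
  world
TXT

p squiggly # => "hello\nworld\n"
p dashed   # => "  hello\n  world\n"
```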
⇑ Benchmark howto
```ruby
require 'benchmark'

a = [:a, :b]
b = :b
n = 10000000

Benchmark.bm do |x|
  x.report { n.times do !(Array(a) & Array(b)).empty? end }
  x.report { n.times do Array(a).include?(b) end }
end
```
```ruby
require 'benchmark/ips' # instructions per second
require 'uri'

uri = 'http://example.com/foos_json?foo=heyo'

Benchmark.ips do |x|
  x.report('URI.parse') { URI.parse(uri) }
end
```
See benchmark examples in the Hash#deep_merge PR
⇑ Books
- rails6 (beta)
- rails5 test
~~~ * ~~~
2023-08-28
- http://www.dewinter.com/gnupg_howto/english/GPGMiniHowto-4.html
- https://github.com/ueno/ruby-gpgme/blob/master/README.rdoc
- http://stackoverflow.com/questions/17140499/implementing-gpg-encryption-in-ruby
- http://www.tecmint.com/linux-password-protect-files-with-encryption/ (via google)
- http://stackoverflow.com/questions/4128939/simple-encryption-in-ruby-without-external-gems
- https://gist.github.com/byu/99651
- https://en.wikipedia.org/wiki/Galois/Counter_Mode -- GCM
- http://crypto.stackexchange.com/questions/17999/aes256-gcm-can-someone-explain-how-to-use-it-securely-ruby
- http://www.rubydoc.info/stdlib/openssl/OpenSSL/Cipher
- http://stackoverflow.com/questions/20906839/does-ruby-1-9-3-support-aes-ni-in-the-openssl-module
AES-256-CBC openssl compatible example
```ruby
#!/usr/bin/env ruby
require 'openssl'
require 'base64'

plain = "hello world"

puts '----- ENCODE ---------------------------------------------------------------'

# https://stackoverflow.com/questions/32346466/encrypt-file-with-ruby-openssl-and-decrypt-with-command-line
cipher = OpenSSL::Cipher.new('AES-256-CBC').encrypt
cipher.iv = iv = cipher.random_iv

# https://ruby-doc.org/stdlib-2.4.3/libdoc/openssl/rdoc/OpenSSL/PKCS5.html
pass    = "foobar"
salt    = OpenSSL::Random.random_bytes(16)
iter    = 20000
key_len = cipher.key_len #16
digest  = OpenSSL::Digest::SHA256.new
key     = OpenSSL::PKCS5.pbkdf2_hmac(pass, salt, iter, key_len, digest)
cipher.key = key

puts "salt = #{salt.unpack('H*').first}"
puts "key  = #{key .unpack('H*').first}"
puts "iv   = #{iv  .unpack('H*').first}"

encrypted = cipher.update plain
encrypted << cipher.final
encrypted_b64 = Base64.strict_encode64 encrypted

puts "ruby enc = #{encrypted_b64}"
print "ossl enc = "
system "echo -n #{plain} | openssl enc -aes-256-cbc -iv #{iv.unpack('H*').first} -K #{key.unpack('H*').first} -e | base64"

puts '----- DECODE ---------------------------------------------------------------'

cipher = OpenSSL::Cipher.new('AES-256-CBC').decrypt
cipher.iv  = iv
cipher.key = key

puts "ruby dec = #{cipher.update(encrypted) + cipher.final}"
print "ossl dec = "
system "echo -n #{encrypted_b64} | base64 -d | openssl enc -aes-256-cbc -iv #{iv.unpack('H*').first} -K #{key.unpack('H*').first} -d"
```
Simple string encoding
```ruby
# https://stackoverflow.com/questions/11044324/how-to-encrypt-files-with-ruby
# https://ruby-doc.com/stdlib/libdoc/openssl/rdoc/OpenSSL/Cipher.html#class-OpenSSL::Cipher-label-Encrypting+and+decrypting+some+data
#
# p = String.aes_params
# 'test'.aes_encrypt(**p).aes_decrypt(**p)
module StringUtils
  def self.included(base)
    base.extend ClassMethods
  end

  module ClassMethods
    # get random params for encryption
    def aes_params
      cipher = OpenSSL::Cipher::AES.new(256, :CBC)
      cipher.encrypt
      { key: cipher.random_key, iv: cipher.random_iv }
    end # aes_params
  end # ClassMethods

  def aes_encrypt(key:, iv:)
    cipher = OpenSSL::Cipher::AES.new(256, :CBC)
    cipher.encrypt
    cipher.key = key
    cipher.iv  = iv
    cipher.update(self) + cipher.final
  end # aes_encrypt

  def aes_decrypt(key:, iv:)
    decipher = OpenSSL::Cipher::AES.new(256, :CBC)
    decipher.decrypt
    decipher.key = key
    decipher.iv  = iv
    decipher.update(self) + decipher.final
  end # aes_decrypt

  def b64_encode = Base64.strict_encode64(self)
  def b64_decode = Base64.strict_decode64(self)
end

String.send :include, StringUtils
```
~~~ * ~~~
2023-08-24
Configuration
The following instructions use the wonderful proxy.sh VPN provider: they are cheap, transparent, strongly privacy oriented, and they offer many servers, proxies and scriptable web APIs!
Start by installing and configuring the software:
```shell
apt-get install openvpn netselect
```
Create the `proxysh.auth` text file containing your VPN credentials:
```shell
mkdir -p /path/to/myvpn
cd /path/to/myvpn
echo -e "myusername\nmypassword" > proxysh.auth
```
Set up an openvpn config updater script, say `proxysh-update.sh`, and substitute `username`, `password` with your VPN credentials:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 | #!/bin/bash OVPND=/path/to/openvpn/config cat > $OVPND/proxysh.conf.tmp <<VPNCFG # https://openvpn.net/index.php/open-source/documentation/manuals/65-openvpn-20x-manpage.html client dev tun proto tcp port 443 auth SHA512 auth-user-pass $OVPND/proxysh.auth cipher AES-256-CBC remote-cert-tls server resolv-retry infinite verb 3 reneg-sec 0 route-method exe route-delay 2 mute 2 mute-replay-warnings nobind comp-lzo # https://community.openvpn.net/openvpn/wiki/IgnoreRedirectGateway #pull-filter ignore redirect-gateway # args: tun_dev tun_mtu link_mtu local_ip remote_ip [init|restart] up-restart script-security 2 up $OVPND/proxysh-cmd-up.sh #down $OVPND/proxysh-cmd-down.sh # ping every N seconds and restart after M without reply keepalive 30 90 <ca> -----BEGIN CERTIFICATE----- MIIGaDCCBFCgAwIBAgIJAND7im/kkgtyMA0GCSqGSIb3DQEBBQUAMH8xCzAJBgNV BAYTAlNDMQswCQYDVQQIEwJWQTERMA8GA1UEBxMIVmljdG9yaWExETAPBgNVBAoT CFByb3h5LnNoMREwDwYDVQQDEwhwcm94eS5zaDELMAkGA1UEKRMCSVQxHTAbBgkq hkiG9w0BCQEWDmFkbWluQHByb3h5LnNoMB4XDTE0MDQxMDE3MDYwN1oXDTI0MDQw NzE3MDYwN1owfzELMAkGA1UEBhMCU0MxCzAJBgNVBAgTAlZBMREwDwYDVQQHEwhW aWN0b3JpYTERMA8GA1UEChMIUHJveHkuc2gxETAPBgNVBAMTCHByb3h5LnNoMQsw CQYDVQQpEwJJVDEdMBsGCSqGSIb3DQEJARYOYWRtaW5AcHJveHkuc2gwggIiMA0G CSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCudxcgt15bZsiW8iW2md3CKe2zrPqJ 6OBcO2yhn8Tkb7S7IHaDFhiUyHeN9Z4GVKNpbMbWxr3Bo9T/VZZUlwfoG2lwkucf 9Wry7a0aLzZGlA1SKngBrTzAo9cvKC+qadD1DrOrqLppRozYDtZZhkiKiOMghbIu V763dRiMnC0XQM4CCORXJPwC35nkFtmAdKcAFrA1aXOwv+KF/pK4IgHmRCI+lREe 52iPuIzoBlr7Nlivu8f4Dw3nYMZOVtWHKay1C3NJSdPUWLjreJYXlfvisd/78dTA KqOZ34GX6Xtc9ux1WhjDYzFz8DvgkSM5BCHfyQNZIAAgj1Os/GehBdZjBoDt+crv 
lL7PIwDOZiqoO76Kpqqz6NSHnut/PuJ/o3xUNMX67+cj2C3VbXArfqqNsb3viBbG Ohd+vN+z5c1+xn1j2D0ZAD3i678Mw8D3xYEF7mcTtQs8W8dHGxsxO761YHyCAZl7 z0+g7TpLvOnoCpQ07AwzAk3I2M5hLIgaIaaFOIEhCiLQNDVFE9gXczwEAT+nyn+Z TTNyS1DOi7iP2j++n+6EONamR92gGe1jTaTDovhcYeFkrToyfWQ5lIKxHb1xyp3v gPpwTZFDC5CT/unAyPNf36REJM+ZQZLFwmrzO/1DXBxNVDwGqnFzI+CAzOBUBqLN A910x7pjvyu9hQIDAQABo4HmMIHjMB0GA1UdDgQWBBR20DqwFm/reSSYZ2sEp1j1 GFgYjjCBswYDVR0jBIGrMIGogBR20DqwFm/reSSYZ2sEp1j1GFgYjqGBhKSBgTB/ MQswCQYDVQQGEwJTQzELMAkGA1UECBMCVkExETAPBgNVBAcTCFZpY3RvcmlhMREw DwYDVQQKEwhQcm94eS5zaDERMA8GA1UEAxMIcHJveHkuc2gxCzAJBgNVBCkTAklU MR0wGwYJKoZIhvcNAQkBFg5hZG1pbkBwcm94eS5zaIIJAND7im/kkgtyMAwGA1Ud EwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggIBAB5VEXyMqs8DLi3aVa2whsSRsx63 IAeroZqGrjUePnE0nSNoieM5tNYn2pLI0UJfaEWwu3IUJlALQfcbcmXPYARf0uxi 1rPoz0U6vIWdzv4YtEJUD0vCt9Z9XIUsFSmpruTbNAU1WUpCNun7p3ZckNqEmEzI f0cMWFaS0v8rxow5JDFB2WwCreNMsmk+RlKGrgKrIoi29Z8WZIBlYzltaKhEXUXm Q1PrP47LD5xi5K7VVKTSqYRZeKlpkGmUXVRPq0zkewB/dUy8m3qsogScUBpB2YOt Rpc4p3bSZsoMfet/iQSDf53HvztFsPVkEz4c0QGYFVnVQpXycQ8rqjrGOG0Vp3A+ v+Sj17YIGUJL8yM40vVFm3KDOZ0+HlRNwEY9AWjHdRH4bBysZAbmBq1ixrfA+MmD l2Kvb5jA156JW32MZd0xDqZHv+5UJE5HbnfqNf+6F//9orDGJh9ff4K8ENlTfXZ9 vl27rX46//fXpjwoS/pWtZxfBl5OVl8e13oz2wzvvcIEOH+R3oU1AimvPo6p0Eew d3uICbB8hvAnJrZJGL7POu/cvdxdY282PGpYQOsmnSyidiftbdbtTpxIfS8sHaJE 6pUsKleoGA04GoM1W+Zd4MVi8ns+vr7qI/Kijc+/PwNsmKOE+NHMUGfjbXYCyvMm TSMSym4Np+AmT7OX -----END CERTIFICATE----- </ca> #remote-random VPNCFG echo -n "`date +'%F %T'`: refresh config servers... " # get servers list with load < 80% post_auth=`cat $OVPND/proxysh.auth | tr "\n" "#" | sed -r 's/([^#]+)#([^#]+).*/u=\1\&p=\2/'` curl -s -X POST https://proxy.sh/api.php -d "$post_auth" | \ sed 's/^ *//' | tr -d "\n" | sed 's/<.server>/\0\n/g' | \ grep -Ev "Hub" | grep -Ev "load>(100|[5-9][0-9])" | \ sed -r 's/.*ress>(.+)<.add.*load>(.+)<.server_load.*/\1/' \ > $OVPND/proxysh.srv # check servers length if [ `cat $OVPND/proxysh.srv | wc -l` -eq 0 ]; then echo " DL ERROR!" 
exit fi # order servers by ping time (UDP) sudo netselect -s 64 `cat $OVPND/proxysh.srv | tr "\n" " "` 2> /dev/null | \ sed 's/.* /remote /' >> $OVPND/proxysh.conf.tmp # check servers length if [ `grep -E "remote [0-9]+" $OVPND/proxysh.conf.tmp | wc -l` -eq 0 ]; then echo " SORT ERROR!" exit fi # update the real config file chmod 600 $OVPND/proxysh.conf.tmp mv -f $OVPND/proxysh.conf.tmp $OVPND/proxysh.conf rm -f $OVPND/proxysh.srv echo "OK" |
Create an optional `proxysh-cmd-up.sh` script that runs after each openvpn start/restart:
```shell
#!/bin/bash
# script args: tun_dev tun_mtu link_mtu local_ip remote_ip [init|restart]

# updating deluge listen_interface IP address
if [ -d $HOME/.config/deluge ]; then
  cd $HOME/.config/deluge
  sed -i -r 's/("listen_interface": ")(.*)(")/\1'$4'\3/' ./core.conf
  if pgrep deluged > /dev/null ; then
    sudo -u cloud deluge-console "pause *"
    sudo -u cloud deluge-console "config -s listen_interface $4" ; sleep 3
    sudo -u cloud deluge-console "resume *"
  fi
  cd -
fi

exit 0
```
Create a script to start and keep running the openvpn daemon, say `run.sh`:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 | #!/usr/bin/sudo /bin/bash [ `whoami` != "root" ] && { echo "you must be root!" && exit 1; } cd /path/to/myvpn || { echo "pwd not found!" && exit 2; } # read current config for later usage tbl=10 dev=`ip link | paste - - | grep -iE "state up.+ether" | head -n 1 | sed -r 's/[0-9]+: ([^:]+).+/\1/'` # device name met=`ip route | grep -E "^default via.+dev $dev.+metric" | sed -r 's/^.+metric ([0-9]+)[^0-9]*/\1/' ` # metric sub=`ip route | grep -E ".+\/.+dev $dev .+metric $met" | sed -r 's/^([0-9.]+\/[0-9]+).+/\1/' ` # subnet src=`ip route | grep -E ".+\/.+dev $dev .+metric $met" | sed -r 's/.+src ([0-9.]+) .+/\1/' ` # source gwy=`ip route | grep -E ".+via.+dev $dev.+metric $met" | sed -r 's/.+via ([0-9.]+) .+/\1/' ` # gateway # echo -e "dev: $dev\nmet: $met\nsub: $sub\nsrc: $src\ngwy: $gwy" if [ -z "$tbl" -o -z "$dev" -o -z "$met" -o -z "$sub" -o -z "$src" -o -z "$gwy" ]; then echo "configuration error" exit fi flg_restart="/tmp/openvpn.restart" flg_iproute="/tmp/openvpn.routes" del_ext_routes_rules () { echo " deleting old routes..." ip route del default via $gwy table $tbl dev $dev # local gateway ip ip route del to $sub table $tbl dev $dev # local subnet ip rule del from $src table $tbl # local ip } # del_ext_routes_rules add_ext_routes_rules () { echo " adding routes..." 
ip rule add from $src table $tbl # local ip ip route add to $sub table $tbl dev $dev # local subnet ip route add default via $gwy table $tbl dev $dev # local gateway ip } # add_ext_routes_rules # allow ssh via on non vpn address while vpn is open # https://forums.openvpn.net/viewtopic.php?f=15&t=7163&start=20 # https://serverfault.com/questions/659955/allowing-ssh-on-a-server-with-an-active-openvpn-client allow_outside_connections_to_eth () { echo "`date +'%F %T'`: updating routes..." # fix random lock out by deleting previous rules/routes del_ext_routes_rules ; sleep 1 del_ext_routes_rules 2> /dev/null # paranoia # re-add rules/routes add_ext_routes_rules } # allow_outside_connections_to_eth # another idea to check connection is up: try pinging vpn server ip (ping -c1 -w5 -q \`ip route|grep via.*eth0|sed ...\`) # here instead we check SSH connectivity: ensure_ssh_from_outside_via_eth () { tun_ip=`ip route | grep "tun.*src" | cut -f 9 -d ' '` cur_ip=`dig -b $src @208.67.222.222 +short myip.opendns.com` #[ -n "$tun_ip" -a -n "$cur_ip" ] && \ # { nc -w 5 -z -s $tun_ip $cur_ip 22 || allow_outside_connections_to_eth; } if ! [ -n "$tun_ip" -a -n "$cur_ip" ] ; then echo "`date +'%F %T'`: tun/eth IPs unavailable, stopping VPN..." quit_openvpn return fi if nc -w 5 -z -s $tun_ip $cur_ip 22 ; then rm -f $flg_iproute else if ! [ -f "$flg_routes" ]; then allow_outside_connections_to_eth touch $flg_iproute else echo "`date +'%F %T'`: no access from outside, stopping VPN..." quit_openvpn rm -f $flg_iproute fi fi } # ensure_ssh_from_outside_via_eth quit_openvpn () { # clean status rm -f $flg_iproute del_ext_routes_rules pkill openvpn ; sleep 2 pgrep openvpn > /dev/null && pkill -KILL openvpn } # quit_openvpn canary_failed () { echo "`date +'%F %T'`: CANARY FAILED ($1)! stopping VPN & exiting..." 
quit_openvpn exit } # canary_failed # exit if requested if [ "$1" = "stop" ]; then quit_openvpn exit fi # infinite loop run/check/restart openvpn while : ; do ts=`date +'%F %T'` # test proxy.sh warrant canary at 3AM psh_canary="/tmp/proxysh-${ts:0:10}.canary" if [ ! -f $psh_canary -a "03" = "${ts:11:2}" ]; then echo "$ts: testing warrant canary..." rm -f /tmp/proxysh-*.canary # purge previous file wget -q -O - https://proxy.sh/canary | gzip -9 -c > $psh_canary # test messages list cnd='The below "warrant canary" has been generated on: ' #`date +%F` cn1="To this date, there has been no warrants, searches or seizures that have not been reported in our Transparency Report, and that have actually taken place. The sky is blue :)" cn2="No warrants, searches or seizures of any kind, other than those reported via our Transparency Report, have ever been performed on proxy.sh assets, including in the following locations:" # test messages presence zgrep "$cnd" $psh_canary # print generation date zgrep "$cn1" $psh_canary > /dev/null || canary_failed "cn1" zgrep "$cn2" $psh_canary > /dev/null || canary_failed "cn2" fi if ! pgrep openvpn > /dev/null ; then ./proxysh-update.sh echo "$ts: restarting openvpn..." echo "${ts:0:10}" > $flg_restart # update restart flag allow_outside_connections_to_eth > /dev/null 2>&1 openvpn --config proxysh.conf --daemon sleep 20 fi # check SSH is reachable from outside via eth0 every 10 minutes [ "${ts:15:1}" = "0" ] && ensure_ssh_from_outside_via_eth # restart daily at 1AM if [ "$(<$flg_restart)" != "${ts:0:10}" -a "01" = "${ts:11:2}" ]; then echo echo "$ts: daily stopping..." quit_openvpn fi #echo -en "$ts\r" sleep 15 done |
The `ip` commands are used to keep allowing incoming traffic from the previous default gateway `192.168.1.1` to the initial ip `192.168.1.110`, in order to continue accessing your host services from the outside.
Note: rarely you may lose connectivity via your physical interface (e.g.: `dig -b 192.168.1.110 ...`) for apparently no reason... just restart the networking service and the vpn:
```shell
sudo pkill openvpn && sudo systemctl restart networking && ./run.sh
```
This should not happen anymore thanks to the `ensure_ssh_from_outside_via_eth` function.
Extras
- To know your public IPs while openvpn is running type:
```shell
dig @208.67.222.222 myip.opendns.com                    # get VPN server IP
dig -b 192.168.1.110 @208.67.222.222 myip.opendns.com   # get ISP public IP
```
- If you want, you can use the available proxy.sh proxies:
```shell
socks.proxy.sh:1080   # usable inside the vpn (no auth required)
ext-eu.proxy.sh:1080  # usable outside the vpn (same VPN credentials)
ext-us.proxy.sh:1080  # usable outside the vpn (same VPN credentials)
```
Here you can find more info for your precious privacy:
- Warrant canary (checked in `run.sh`)
- Network alerts (outage, maintenance)
- Transparency report (abuse complaints)
For stronger security you can also nest your VPN connections, but make sure that the routes which are set by the second vpn client do not replace the direct access to the first vpn server:
```shell
# Note: VPN servers must have different IPs
ip route add IP_OF_1st_VPN_SERVER dev WAN_INTERFACE      # eth0/wlan0
ip route add IP_OF_2nd_VPN_SERVER dev 1st_VPN_INTERFACE  # tun0
```
- In order to retain connectivity on your default interface and just create a new virtual interface `tun0` for a separate use, you can override the routes pushed by the server by adding these lines to the configuration file:
```
route 0.0.0.0 192.0.0.0 net_gateway
route 64.0.0.0 192.0.0.0 net_gateway
route 128.0.0.0 192.0.0.0 net_gateway
route 192.0.0.0 192.0.0.0 net_gateway
```
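Those four routes work because netmask 192.0.0.0 is a /2, and the four /2 subnets partition the whole IPv4 space: being more specific than the pushed default route (/0), they win. A quick plain-Ruby check of that partition:

```ruby
require 'ipaddr'

# The four /2 networks from the config lines above (mask 192.0.0.0 = /2)
nets = %w[0.0.0.0 64.0.0.0 128.0.0.0 192.0.0.0].map { |a| IPAddr.new("#{a}/2") }

# Together they cover all 2^32 IPv4 addresses, i.e. the same space as
# the default route, but with a more specific (winning) prefix
total = nets.sum { |n| r = n.to_range; r.end.to_i - r.begin.to_i + 1 }
puts total == 2**32 # => true

# Any address falls in exactly one of the four networks
puts nets.count { |n| n.include?(IPAddr.new('8.8.8.8')) } # => 1
```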
Run the vpn inside a network namespace
Run `openvpn` telling it to not configure the `tunX` interface:
```shell
openvpn --config proxysh.conf \
  --ifconfig-noexec --route-noexec \
  --up proxysh-cmd-netns.sh \
  --route-up proxysh-cmd-netns.sh \
  --down proxysh-cmd-netns.sh
```
Namespace creation and network configuration are then managed by the `proxysh-cmd-netns.sh` script:
```shell
#!/bin/sh
case $script_type in
  up)
    ( ip netns list | grep -qs proxysh ) || ip netns add proxysh
    ip netns exec proxysh ip link set dev lo up
    ip link set dev "$1" up netns proxysh mtu "$2"
    ip netns exec proxysh ip addr add dev "$1" \
      "$4/${ifconfig_netmask:-30}" \
      ${ifconfig_broadcast:+broadcast "$ifconfig_broadcast"}
    if [ -n "$ifconfig_ipv6_local" ]; then
      ip netns exec proxysh ip addr add dev "$1" \
        "$ifconfig_ipv6_local"/112
    fi
    ;;
  route-up)
    ip netns exec proxysh ip route add default via "$route_vpn_gateway"
    if [ -n "$ifconfig_ipv6_remote" ]; then
      ip netns exec proxysh ip route add default via \
        "$ifconfig_ipv6_remote"
    fi
    ;;
  down)
    # do not delete the namespace: keep it in case of server disconnection
    #ip netns delete proxysh
    ;;
esac
```
Once the VPN is up and running you can launch programs in the `proxysh` network namespace as `root` like this:
```shell
ip netns exec proxysh su username -l -c "command arguments"
```
or you can conveniently use this script:
```shell
#!/usr/bin/sudo /bin/bash

function die() {
  echo "USAGE: netns-exec namespace cmd [arguments...]"
  exit
}

([ -z "$1" ] && die) || NS_NAME="$1" ; shift  # check namespace
([ -z "$1" ] && die) || NS_CMD="$1"  ; shift  # check command
RUN_CMD="$NS_CMD $@"

ip netns exec $NS_NAME su $SUDO_USER -l -c "${RUN_CMD}"
```
Source: OpenVPN manpage, Proxy.sh raspi howto, allow SSH via eth0, up/down scripts, UFW limit traffic on eth0 & paste.bin script, Deluge vpn/proxy guide, Proxy.sh SOCKS, Deluge VPN 1 and 2, Getting started with Proxy.sh VPN, Ignore pushed routes on Superuser and OpenVPN docs
Tips: test reachable port on server with NC, get SSH server key fingerprint, VPN services comparison
~~~ * ~~~
2023-08-23
Installation on debian
```shell
# check system compatibility
modprobe configs  # loads /proc/config.gz
wget -q -O - https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh | \
  bash | tee docker-check.txt

# install docker: key, repo, packages
apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
# amd64 - x64
echo "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker-ce.list
# armhf - x32 / raspberry pi / raspbian
echo "deb [arch=armhf] https://download.docker.com/linux/raspbian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker-ce.list
apt-get update && apt-get install docker-ce

# allow user to use docker
usermod -aG docker username

# test installation
docker version
docker info

# run a simple test image
docker run hello-world
```
See also post install for troubleshooting dns/network/remote access.
On raspberry pi just use `curl -sSL https://get.docker.com | sh` (repo not working).
Configure daemon
- change docker data folder location
```shell
mkdir -p /path/to/data
chown root.root /path/to/data
chmod 711 /path/to/data
echo '{ "data-root": "/path/to/data" }' > /etc/docker/daemon.json
systemctl restart docker
```
```shell
echo '{ "log-driver": "local" }' > /etc/docker/daemon.json
```
Creating an image (ref, best practices)
```shell
touch Dockerfile              # and fill it
docker build -t test-myimg .  # create the image with a tag

# test run image
docker run -p 4000:80 test-myimg
docker run -it test-myimg /bin/bash

# run image detached/on background
docker run -p 4000:80 -d --name tmi test-myimg
docker container ls -a
docker container stop <container_id>
docker container start -i tmi  # restart container
```
Interact (ref)
```shell
# run interactive shell into debian image (temporary)
docker run --name prova --rm -it debian /bin/bash

# run interactive shell into debian image
docker run -it debian /bin/bash
apt-get update
apt-get install -y dialog nano ncdu
apt-get install -y locales
localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8
echo "LANG=en_US.utf8" >> /etc/environment
rm -rf /var/lib/apt/lists/*
docker commit e2b7329257ba myimg:v1
docker run --rm -it myimg:v1 /bin/bash

# run a command in a running container
docker exec -ti a123098734e bash -il
docker stop a123098734e
docker kill a123098734e
```
Save & restore
```shell
# dump image
docker save imgname | gzip > imgname.tgz
zcat imgname.tgz | docker load

# dump container
docker create --name=mytemp imgname
docker export mytemp | gzip > imgname-container.tgz

# flatten image layers (losing Dockerfile) from a container
docker export <id> | docker import - imgname:tag
```
Registry - Image repository
```shell
# push image to gitlab registry
docker login registry.gitlab.com
docker tag test-myimg registry.gitlab.com/username/repo:tag  # add new tag...
docker rmi test-myimg                                        # ...and remove the old tag
docker push registry.gitlab.com/username/repo:tag
```
DockerHub official base images links: debian, ruby, rails, redis, nginx.
Available free registry services:
Name | # Priv/Pub | Notes
---|---|---
gitlab | inf/ND | 1 registry per project
treescale | inf/inf | max 500 pulls & 50GB
canister | 20/ND | very good service
docker hub | 1/inf | perfect
Running `arm` image on x86
```shell
# https://ownyourbits.com/2018/06/27/running-and-building-arm-docker-containers-in-x86/
apt-get install qemu-user-static

docker run \
  -v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static \
  -e LANG=en_US.utf8 -ti --name myarmimg arm32v7/debian:wheezy
[...]
docker commit myarmimg myarmimg
docker container prune -f

docker run \
  -v /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static \
  -ti --rm --name myarmimg \
  myarmimg /bin/bash -il
```
Compose (ref, dl) - Services
```yaml
# docker-compose.yml
version: "3"
services:
  web:
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:
```
```shell
# install docker-compose
curl -L -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/1.24.0-rc1/docker-compose-`uname -s`-`uname -m`
chmod 755 /usr/local/bin/docker-compose

docker swarm init
docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab
docker service ls
docker service ps getstartedlab_web  # or: docker stack ps getstartedlab

# change the yml file and restart service
docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab
docker service ps getstartedlab_web
docker container prune -f

# stop & destroy service
docker stack rm getstartedlab
docker container prune -f

# leave the swarm
docker swarm leave --force
```
Machine (ref, dl) - SWARM/Provisioning
Remember to update the host firewall: open port `2376` and do not apply rate limits on port `22`.
On the fish shell you can install the useful omf plugin-docker-machine to easily select the current machine.
Without an officially supported driver we can use the generic one. Install docker-ce on your worker nodes, then on your swarm manager host:
```shell
# install docker-machine
curl -L -o /usr/local/bin/docker-machine https://github.com/docker/machine/releases/download/v0.16.1/docker-machine-`uname -s`-`uname -m`
chmod 755 /usr/local/bin/docker-machine

# setup each VM (this creates and shares the certificates for a secure
# connection between your client and the daemon running on the server)
ssh-copy-id -i ~/.ssh/id_rsa user@ww.xx.yy.zz
docker-machine create --driver generic --generic-ssh-key ~/.ssh/id_rsa \
  --generic-ip-address=ww.xx.yy.zz myvm1
ssh-copy-id -i ~/.ssh/id_rsa user@ww.xx.yy.kk
docker-machine create --driver generic --generic-ssh-key ~/.ssh/id_rsa \
  --generic-ip-address=ww.xx.yy.kk myvm2
docker-machine ls

# run a command via ssh in a VM
docker-machine ssh myvm1 "ls -l"                  # use internal SSH lib
docker-machine --native-ssh ssh myvm1 "bash -il"  # use system SSH lib

# set env to run all docker commands remotely on a VM
eval $(docker-machine env myvm1)  # on bash
docker-machine use myvm1          # on fish + omf plugin-docker-machine

# set VM1 to be a swarm manager
docker-machine use myvm1
docker swarm init               # --advertise-addr ww.xx.yy.zz
docker swarm join-token worker  # get token for adding worker nodes

# set VM2 to join the swarm as a worker
docker-machine use myvm2
docker swarm join --token SWMTKN-xxx ww.xx.yy.zz:2377

# check cluster status on your local machine...
docker-machine ls
# ...or on the manager node
docker-machine use myvm1
docker node ls

# locally login on your registry...
docker-machine unset
docker login registry.gitlab.com

# ...then deploy the app on the swarm manager
docker-machine use myvm1
docker stack deploy --with-registry-auth -c docker-compose.yml getstartedlab
docker service ls
docker service ps getstartedlab_web

# access cluster from any VM's IP
curl http://ww.xx.yy.zz:4000
curl http://ww.xx.yy.kk:4000
# eventually re-run "docker stack deploy ..." to apply changes

# undo app deployment
docker-machine use myvm1
docker stack rm getstartedlab

# remove the swarm
docker-machine ssh myvm2 "docker swarm leave"
docker-machine ssh myvm1 "docker swarm leave --force"
```
Stack / Deploy application
```yaml
# docker-compose.yml
version: "3"
services:
  web:
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - "/home/docker/data:/data"
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet
networks:
  webnet:
```
```shell
docker-machine use myvm1
docker-machine ssh myvm1 "mkdir ./data"  # create redis data folder

# run stack / deploy app
docker stack deploy -c docker-compose.yml getstartedlab
docker stack ps getstartedlab

# show deployed services and restart one
docker service ls
docker service update --force getstartedlab_web

firefox http://<myvm1-ip>:8080/  # docker visualizer
redis-cli -h <myvm1-ip>          # interact with redis

docker stack rm getstartedlab
```
Init process to reap zombies and forward signals
- single process: tini (use `docker run --init` or `init: true` in docker-compose.yml)
- multiprocess: s6 and s6-overlay
- init systems comparison
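For the single-process case, enabling tini from Compose is a one-liner; a minimal sketch (service and image names are placeholders):

```yaml
# docker-compose.yml
version: "3.7"  # the init option requires compose file format >= 3.7
services:
  web:
    image: username/repo:tag
    init: true  # run tini as PID 1 to reap zombies and forward signals
```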
SWARM managers
- traefik: github, hp
- portainer (formerly ui-for-docker)
- swarmpit
- dry (terminal gui, one man prj)
- guides, tips and hints at dockerswarm.rocks (also on github)
Container-Host user remapping
You can map container users to the host ones for greater security.
- put `myuser:100000:65536` (start:length) in `/etc/subuid` and `/etc/subgid`; this defines the mapping id range 100000-165535 available to the host user `myuser`
- configure the docker daemon to use the remapping specified for `myuser`:

```shell
echo '{ "userns-remap": "myuser" }' > /etc/docker/daemon.json
systemctl restart docker
```

- note that all images will reside in a `/var/lib/docker` subfolder named after the `myuser` ids
- now all your container user/group ids will be mapped to `100000+id` on the host
You can write up to 5 ranges in the sub* files for each user; in this example we keep ids 0-999 identical and map ids >= 1000 to id+1:
```
myuser:0:1000
myuser:1001:65536
```
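To check the arithmetic, a small shell sketch (the `map_uid` helper is mine, not part of any tool) that resolves a container uid through these two ranges:

```shell
#!/bin/sh
# ranges are consumed in order: container ids 0-999 fall in
# myuser:0:1000 (host start 0), ids >= 1000 fall in
# myuser:1001:65536 (host start 1001), i.e. host id = container id + 1
map_uid() {
  cid=$1
  if [ "$cid" -lt 1000 ]; then
    echo "$((0 + cid))"             # first range starts at host id 0
  else
    echo "$((1001 + cid - 1000))"   # second range starts at host id 1001
  fi
}

map_uid 33    # www-data stays 33 on the host
map_uid 1000  # first regular user becomes 1001
```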
UFW Firewall interactions
Docker bypasses UFW rules, so published ports can be accessed from the outside.
See a solution involving DOCKER-USER and ufw-user-forward/ufw-user-input chains.
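As a sketch of the DOCKER-USER approach: rules in that chain are evaluated before Docker's own forwarding rules, so you can restrict who reaches published ports there (the interface name and subnet below are example values, adapt them to your host):

```shell
# drop forwarded traffic to containers unless it comes from the local subnet;
# eth0 and 192.168.1.0/24 are placeholders
iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```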
Dockerizing Rails
- docker-rails-base -- preinstalled gems, multi stage, multi image, uses onbuild triggers
- dockerfile-rails -- Dockerfile extracted from Rails 7.1 by fly.io
- Kamal -- formerly MRSK, DHH solution, deploy web apps anywhere with zero downtime, guide posts
Terms:
- `service` = containers that run only one/the same image
- `task` = a single container running in a service
- `swarm` = a cluster of machines running Docker
- `stack` = a group of interrelated services, orchestrated and scalable, defining the functionality of an entire application
Source: install, install@raspi, tutorial, overview, manage app data, config. daemon, config. containers,
Source for user mapping: docker docs, jujens.eu, ilya-bystrov
Useful tips: cleanup, network host mode for nginx to get the client's real IP, limit ram/cpu usage, `docker system prune -a -f` to remove all cache files.
See also: thread swarm gui, docker swarm rocks