Compare commits

18 Commits

93e59ab09b, 5e33be47a7, 4473a9cdba, 49d616ef32, 12f795bcf8, 24b21bffed, 7f5d7e9082, 653b482ec5, ecc4bdf0a7, a45b88da4b, e460ba4727, 910e428baa, 8e6bc69713, f624f5677b, 2064099696, 67209075c5, 336d686ca2, 4f0e24f1f7
```diff
@@ -131,7 +131,7 @@ Archweb provides multiple management commands for importing various sorts of data
 * reporead_inotify - Watches a templated patch for updates of *.files.tar.gz to update Arch databases with.
 * donor_import - Import a single donator from a mail passed to stdin
 * mirrorcheck - Poll every active mirror URLs to store the lastsnyc time and record network timing details.
-* mirrorresolv - Poll every active mirror URLs and determine wheteher they have IP4 and/or IPv6 addresses.
+* mirrorresolv - Poll every active mirror URLs and determine whether they have IP4 and/or IPv6 addresses.
 * populate_signoffs - retrieves the latest commit message of a signoff-eligible package.
 * update_planet - Import all feeds for users who have a valid website and website_rss in their user profile.
 * read_links - Reads a repo.links.db.tar.gz file and updates the Soname model.
```
```diff
@@ -59,7 +59,7 @@ class Command(BaseCommand):
         arches = Arch.objects.filter(agnostic=False)
         repos = Repo.objects.all()
 
-        arch_path_map = {arch: None for arch in arches}
+        arch_path_map = dict.fromkeys(arches)
         all_paths = set()
         total_paths = 0
         for arch in arches:
```
```diff
@@ -72,7 +72,7 @@ class Command(BaseCommand):
         arches = Arch.objects.filter(agnostic=False)
         repos = Repo.objects.all()
 
-        arch_path_map = {arch: None for arch in arches}
+        arch_path_map = dict.fromkeys(arches)
         all_paths = set()
         total_paths = 0
         for arch in arches:
```
```diff
@@ -1,20 +1,20 @@
 version: '2'
 
 # Run the following once:
-# docker compose run --rm packages_web python manage.py migrate
-# docker compose run --rm packages_web python manage.py loaddata main/fixtures/arches.json
-# docker compose run --rm packages_web python manage.py loaddata main/fixtures/repos.json
-# docker compose run --rm packages_web python manage.py createsuperuser --username=admin --email=admin@artixweb.local
+# docker compose run --rm archweb_web python manage.py migrate
+# docker compose run --rm archweb_web python manage.py loaddata main/fixtures/arches.json
+# docker compose run --rm archweb_web python manage.py loaddata main/fixtures/repos.json
+# docker compose run --rm archweb_web python manage.py createsuperuser --username=admin --email=admin@artixweb.local
 ## go to /admin and create a user according to overlay/devel/fixtures/user_profiles.json
 ## go to /admin/auth/user/2/change/ and add a name
-# docker compose run --rm packages_web python manage.py generate_keyring pgp.surfnet.nl ./config/keyring
-# docker compose run --rm packages_web python manage.py pgp_import ./config/keyring
+# docker compose run --rm archweb_web python manage.py generate_keyring pgp.surfnet.nl ./config/keyring
+# docker compose run --rm archweb_web python manage.py pgp_import ./config/keyring
 ## go to /admin/devel/developerkey/ and set the owner (and parent) for the ownerless key
 ## go to /admin/sites/site/1/change/ and set the domain
 
 
 services:
-  packages_web:
+  archweb_web:
     container_name: artixweb-packages
     build:
       context: ./
```
```diff
@@ -25,7 +25,7 @@ services:
     volumes:
       - ./config:/usr/src/web/config
 
-  packages_sync:
+  archweb_sync:
     container_name: artixweb-sync
     build:
       context: ./
```
```diff
@@ -35,7 +35,7 @@ services:
       - ./config:/usr/src/web/config
     command: ./downloadpackages.sh
 
-  packages_nginx:
+  archweb_nginx:
     container_name: artixweb-nginx
     image: linuxserver/nginx:latest
     restart: "no"
```
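Note that only the compose service keys are renamed (`packages_*` to `archweb_*`); the `container_name` values keep their `artixweb-` prefixes, so the running container names are unaffected and only compose-level references change, such as the `docker compose run` commands in the comments above.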
```diff
@@ -10,7 +10,7 @@ def format_key(key_id):
     if len(key_id) in (8, 20):
         return '0x%s' % key_id
     elif len(key_id) == 40:
-        # normal display format is 5 groups of 4 hex chars seperated by spaces,
+        # normal display format is 5 groups of 4 hex chars separated by spaces,
         # double space, then 5 more groups of 4 hex chars
         split = tuple(key_id[i:i + 4] for i in range(0, 40, 4))
         return '%s\u00a0 %s' % (' '.join(split[0:5]), ' '.join(split[5:10]))
```
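Beyond the comment typo fix, this hunk shows the whole fingerprint-formatting rule: ten groups of four hex characters, with a non-breaking space widening the gap between the two halves. A self-contained sketch of that logic; the sample fingerprint is invented for illustration, and the fall-through return is an assumption rather than part of the original:

```python
def format_key(key_id):
    # Same logic as the filter above: short key IDs get an 0x prefix,
    # full 40-char fingerprints are split into ten 4-char groups with a
    # wider (NBSP) gap between the first and last five groups.
    if len(key_id) in (8, 20):
        return '0x%s' % key_id
    elif len(key_id) == 40:
        split = tuple(key_id[i:i + 4] for i in range(0, 40, 4))
        return '%s\u00a0 %s' % (' '.join(split[0:5]), ' '.join(split[5:10]))
    return key_id  # assumption: unknown lengths pass through unchanged

print(format_key('ABCD' * 10))
# ABCD ABCD ABCD ABCD ABCD\u00a0 ABCD ABCD ABCD ABCD ABCD
```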
```diff
@@ -184,12 +184,26 @@ def check_rsync_url(mirror_url, location, timeout):
     with open(os.devnull, 'w') as devnull:
         if logger.isEnabledFor(logging.DEBUG):
             logger.debug("rsync cmd: %s", ' '.join(rsync_cmd))
 
         start = time.time()
-        proc = subprocess.Popen(rsync_cmd, stdout=devnull, stderr=subprocess.PIPE)
-        _, errdata = proc.communicate()
-        end = time.time()
-        log.duration = end - start
-        if proc.returncode != 0:
+        timeout_expired = False
+        # add an arbitrary 5-second buffer to ensure the process completes and to catch actual rsync timeouts.
+        rsync_subprocess_timeout = timeout + 5
+        try:
+            proc = subprocess.Popen(rsync_cmd, stdout=devnull, stderr=subprocess.PIPE)
+            _, errdata = proc.communicate(timeout=rsync_subprocess_timeout)
+
+            end = time.time()
+            log.duration = end - start
+        except subprocess.TimeoutExpired:
+            timeout_expired = True
+            proc.kill()
+            logger.debug("rsync command timeout error: %s, %s", url, errdata)
+            log.is_success = False
+            log.duration = None
+            log.error = f"rsync subprocess killed after {rsync_subprocess_timeout} seconds"
+
+        if proc.returncode != 0 and not timeout_expired:
             logger.debug("error: %s, %s", url, errdata)
             log.is_success = False
             log.error = errdata.strip().decode('utf-8')
```
```diff
@@ -197,7 +211,7 @@ def check_rsync_url(mirror_url, location, timeout):
             # don't record a duration as it is misleading
             if proc.returncode in (1, 30, 35):
                 log.duration = None
-        else:
+        elif not timeout_expired:
             logger.debug("success: %s, %.2f", url, log.duration)
             if os.path.exists(lastsync_path):
                 with open(lastsync_path, 'r') as lastsync:
```
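These two hunks together bound the rsync child process: `communicate(timeout=...)` raises `subprocess.TimeoutExpired`, the process is killed, and the `timeout_expired` flag keeps the later return-code branches from misreporting the result. A minimal standalone sketch of the idiom, with `sleep` standing in for rsync; note that it calls `communicate()` once more after `kill()` to reap the child and drain its pipes, as the `subprocess` docs recommend (in the hunk above, by contrast, `errdata` is only bound if the first `communicate()` returned):

```python
import subprocess

# 'sleep 10' stands in for the rsync command; the 2-second timeout plays
# the role of timeout + 5 above.
proc = subprocess.Popen(['sleep', '10'], stderr=subprocess.PIPE)
try:
    _, errdata = proc.communicate(timeout=2)
except subprocess.TimeoutExpired:
    proc.kill()
    # Reap the killed process and collect any buffered stderr.
    _, errdata = proc.communicate()
    print('subprocess killed after 2 seconds')
```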
```diff
@@ -27,7 +27,7 @@ def test_mirrorurl_get_full_url(mirrorurl):
 
 def test_mirror_url_clean(mirrorurl):
     mirrorurl.clean()
-    # TOOD(jelle): this expects HOSTNAME to resolve, maybe mock
+    # TODO(jelle): this expects HOSTNAME to resolve, maybe mock
     assert mirrorurl.has_ipv4
     # requires ipv6 on host... mock?
     # assert mirrorurl.has_ipv6 == True
```
```diff
@@ -67,7 +67,7 @@ def test_sort(client, package):
 def test_packages(client, package):
     response = client.get('/opensearch/packages/')
     assert response.status_code == 200
-    assert 'template="example.com/opensearch/packages/"' in response.content.decode()
+    assert 'template="http://example.com/opensearch/packages/"' in response.content.decode()
 
 
 def test_packages_suggest(client, package):
```
```diff
@@ -25,7 +25,7 @@ def opensearch(request):
    current_site = Site.objects.get_current()
 
    return render(request, 'packages/opensearch.xml',
-                 {'domain': current_site.domain},
+                 {'domain': f'{request.scheme}://{current_site.domain}'},
                  content_type='application/opensearchdescription+xml')
 
 
```
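This change (together with the identical one in the next hunk and the test update above) turns the bare domain into an absolute URL, since `request.scheme` is `'http'` or `'https'` depending on how the request arrived. The effect, sketched with hypothetical stand-ins for the request and site objects:

```python
# Hypothetical stand-ins for Django's request and Site objects, just to
# show the value the template receives before and after the change.
class FakeRequest:
    scheme = 'https'  # Django derives this from the request/proxy headers

class FakeSite:
    domain = 'example.com'

request, current_site = FakeRequest(), FakeSite()
assert f'{request.scheme}://{current_site.domain}' == 'https://example.com'
```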
```diff
@@ -31,7 +31,7 @@ def index(request):
         'news_updates': News.objects.order_by('-postdate', '-id')[:15],
         'pkg_updates': updates,
         'staff_groups': StaffGroup.objects.all(),
-        'domain': current_site.domain,
+        'domain': f'{request.scheme}://{current_site.domain}',
     }
     return render(request, 'public/index.html', context)
 
```
```diff
@@ -1,5 +1,5 @@
 -e git+https://github.com/fredj/cssmin.git@master#egg=cssmin
-Django==5.0.11
+Django==5.0.14
 IPy==1.1
 Markdown==3.3.7
 bencode.py==4.0.0
```
```diff
@@ -7,7 +7,7 @@
 <div class="box">
 
     <h2>Tier 0 Mirror usage information</h2>
-    <p>Arch Linux Tier 0 mirror on <a href="https://repos.archlinux.org">repos.archlinux.org</a> which can be used if to obtain the absolute latest packages. The mirror is protected with a HTTP Basic Auth password unique per Staff member.</p>
+    <p>Arch Linux Tier 0 mirror on <a href="https://repos.archlinux.org">repos.archlinux.org</a> which can be used if to obtain the absolute latest packages. The mirror is protected with an HTTP Basic Auth password unique per Staff member.</p>
     {% if mirror_url %}
     <code id="serverinfo">Server = {{ mirror_url }}</code> <button id="copybutton">Copy to clipboard</button>
 
```
```diff
@@ -84,6 +84,8 @@
 
     <h3>Past donors</h3>
 
+    <p><a href="http://www.dotcom-monitor.com/" title="Dotcom-Monitor">Dotcom-Monitor</a> & <a href="https://www.loadview-testing.com/" title="LoadView">LoadView</a></p>
+
     <div id="donor-list">
         <ul>
             {% for donor in donors %}
```
```diff
@@ -93,6 +93,14 @@
 
     <p>Official virtual machine images are available for download on our <a href="https://gitlab.archlinux.org/archlinux/arch-boxes/-/packages">GitLab instance</a>, more information is available in the <a href="https://gitlab.archlinux.org/archlinux/arch-boxes/">README</a>.</p>
 
+    <h3>WSL images</h3>
+
+    <p>The official WSL image can be installed with the following command (in a PowerShell prompt from a Windows system with WSL 2 installed):</p>
+    <code>wsl --install archlinux</code>
+
+    <p>It is also available for download on <a href="https://geo.mirror.pkgbuild.com/wsl/latest">mirrors</a>.</p>
+    <p>More information available in the <a href="https://wiki.archlinux.org/title/Install_Arch_Linux_on_WSL">Wiki</a>.</p>
+
     <h3 id="http-downloads">HTTP Direct Downloads</h3>
 
     <p>In addition to the BitTorrent links above, install images can also be
```
```diff
@@ -132,10 +140,10 @@
 <pre><code>$ b2sum -c b2sums.txt</code></pre>
 
 To verify the PGP signature using Sequoia, first download the release signing key from WKD:
-<pre><code>$ sq network wkd fetch {{ release.wkd_email }} -o release-key.pgp</code></pre>
+<pre><code>$ sq network wkd search {{ release.wkd_email }} --output release-key.pgp</code></pre>
 
 With this signing key, verify the signature:
-<pre><code>$ sq verify --signer-file release-key.pgp --detached archlinux-{{ release.version }}-x86_64.iso.sig archlinux-{{ release.version }}-x86_64.iso</code></pre>
+<pre><code>$ sq verify --signer-file release-key.pgp --signature-file archlinux-{{ release.version }}-x86_64.iso.sig archlinux-{{ release.version }}-x86_64.iso</code></pre>
 
 Alternatively, using GnuPG, download the signing key from WKD:
 <pre><code>$ gpg --auto-key-locate clear,wkd -v --locate-external-key {{ release.wkd_email }}</code></pre>
```
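Both substitutions track renames in the Sequoia CLI: WKD key discovery moves from `sq network wkd fetch ... -o` to `sq network wkd search ... --output`, and detached-signature verification moves from `--detached` to `--signature-file`. The GnuPG instructions are unchanged.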