Commits on Source (46)
......@@ -43,30 +43,40 @@ Clean
### Configuration
Example Webapp configuration:
Webapp service configuration parameters and their defaults:
- SAFEMODE=False
- DJANGO_DEV_SERVER=True
- DJANGO_DEBUG=True
- SAFEMODE=false
- DJANGO_DEV_SERVER=true
- DJANGO_DEBUG=true
- DJANGO_LOG_LEVEL=ERROR
- ROSETTA_LOG_LEVEL=ERROR
- ROSETTA_TUNNEL_HOST=localhost # Not http or https
- ROSETTA_WEBAPP_HOST=
- ROSETTA_HOST=localhost
- ROSETTA_TUNNEL_HOST=localhost
- ROSETTA_WEBAPP_HOST=""
- ROSETTA_WEBAPP_PORT=8080
- LOCAL_DOCKER_REGISTRY_HOST=
- LOCAL_DOCKER_REGISTRY_PORT=5000
- ROSETTA_REGISTRY_HOST=proxy
- ROSETTA_REGISTRY_PORT=5000
- DJANGO_EMAIL_SERVICE=Sendgrid
- DJANGO_EMAIL_APIKEY=
- DJANGO_EMAIL_APIKEY=""
- DJANGO_EMAIL_FROM="Rosetta <notifications@rosetta.local>"
- DJANGO_PUBLIC_HTTP_HOST=http://localhost # Public facing, with http or https
- INVITATION_CODE=""
- OIDC_RP_CLIENT_ID=""
- OIDC_RP_CLIENT_SECRET=""
- OIDC_OP_AUTHORIZATION_ENDPOINT=""
- OIDC_OP_TOKEN_ENDPOINT=""
- OIDC_OP_JWKS_ENDPOINT=""
- DISABLE_LOCAL_AUTH=False
In Rosetta, only power users can:
- DISABLE_LOCAL_AUTH=false
Notes:
- `ROSETTA_TUNNEL_HOST` must not include http:// or https://
- `ROSETTA_REGISTRY_HOST` should be set to the same value as `ROSETTA_HOST` for production scenarios (see the example below), in order to be secured under SSL. The `standaloneworker` is configured to treat the following hosts (and ports) as insecure registries, to which it can connect without a valid certificate: `proxy:5000`, `dregistry:5000` and `rosetta.platform:5000`.
- `ROSETTA_WEBAPP_HOST` is used to let the agent know where to connect; it is kept separate from `ROSETTA_HOST` because it can sit on an internal Docker network. It indeed defaults to the `webapp` container IP address.
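As an example, a production-style deployment behind SSL might override the defaults as follows (the hostname is a placeholder):
ROSETTA_HOST=rosetta.example.org
ROSETTA_REGISTRY_HOST=rosetta.example.org
ROSETTA_TUNNEL_HOST=rosetta.example.org
DJANGO_PUBLIC_HTTP_HOST=https://rosetta.example.org
DJANGO_DEV_SERVER=false
DJANGO_DEBUG=false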
### User types
In Rosetta there are two user types: standard users and power users. Their type is set in their user profile, and only power users can:
- set custom task passwords
- choose task access methods other than the default one (bypassing HTTP proxy + auth)
......@@ -121,6 +131,15 @@ Run Web App unit tests (with Rosetta running)
$ rosetta/logs webapp server
$ rosetta/test
### Computing resources requirements
Ensure that computing resources have:
- a container engine or WMS available (of course);
- Python installed and callable via the "python" executable, or the agent will fail;
- Bash as the default shell for SSH-based computing resources (a quick check is sketched below).
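For example, on an SSH-based resource the prerequisites can be verified with a few commands (host and user are placeholders):
$ ssh user@computing-host 'command -v python'    # "python" must be callable
$ ssh user@computing-host 'echo $SHELL'          # should print /bin/bash
$ ssh user@computing-host 'command -v podman || command -v singularity || command -v docker'    # at least one container engine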
## Known issues
......
......@@ -60,15 +60,15 @@ services:
- ROSETTA_LOG_LEVEL=DEBUG
#- ROSETTA_WEBAPP_HOST=localhost # Internal, for the agent
#- ROSETTA_WEBAPP_PORT=8080 # Internal, for the agent
#- LOCAL_DOCKER_REGISTRY_HOST=
#- LOCAL_DOCKER_REGISTRY_PORT=5000
#- ROSETTA_REGISTRY_HOST=
#- ROSETTA_REGISTRY_PORT=5000
#- DJANGO_EMAIL_APIKEY=""
#- DJANGO_EMAIL_FROM="Rosetta Platform <notifications@rosetta.platform>"
#- DJANGO_SECRET_KEY=""
- TASK_PROXY_HOST=localhost
- TASK_TUNNEL_HOST=localhost
- ROSETTA_HOST=localhost
- REGISTRY_HOST=proxy # Use same value as ROSETTA_HOST for production or to use "real" computing resources
- REGISTRY_HOST=proxy:5000 # Use same value as ROSETTA_HOST for production or to use "real" computing resources
ports:
- "8080:8080"
- "7000-7020:7000-7020"
......
......@@ -25,7 +25,7 @@ RUN apt-get install net-tools iproute2 iputils-ping -y
#------------------------
# Scienceuser user
# Rosetta user
#------------------------
# Add group. We chose GID 65527 to try avoiding conflicts.
......@@ -90,47 +90,6 @@ RUN mkdir /prestartup
COPY prestartup.py /
#----------------------
# Singularity
#----------------------
# Dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libssl-dev \
uuid-dev \
libgpgme11-dev \
squashfs-tools \
libseccomp-dev \
pkg-config \
cryptsetup-bin \
wget
# Install GO
RUN cd /tmp && wget https://dl.google.com/go/go1.11.linux-amd64.tar.gz
RUN cd /tmp && tar -zxf go1.11.linux-amd64.tar.gz && mv go /usr/local
ENV GOROOT=/usr/local/go
ENV GOPATH=/root/go
ENV PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
COPY singularity-3.4.1.tar.gz /tmp
# Install Singularity
RUN mkdir -p /usr/local/var/singularity/mnt && \
mkdir -p $GOPATH/src/github.com/sylabs && \
cd $GOPATH/src/github.com/sylabs && \
mv /tmp/singularity-3.4.1.tar.gz ./ && \
tar -xzvf singularity-3.4.1.tar.gz
RUN cd $GOPATH/src/github.com/sylabs/singularity && \
./mconfig -p /usr/local && \
make -C builddir && \
make -C builddir install
# Build test image
RUN mkdir /singularity_images && chmod 777 /singularity_images
COPY testimage.def /singularity_images/testimage.def
RUN singularity build /singularity_images/testimage.simg /singularity_images/testimage.def
#----------------------
# Entrypoint
......
FROM registry:2
RUN set -ex && apk --no-cache add sudo bash
#------------------------
# Rosetta user
#------------------------
# Add group. We chose GID 65527 to try avoiding conflicts.
RUN addgroup -g 65527 rosetta
# Add user. We chose UID 65527 to try avoiding conflicts.
RUN adduser rosetta -D -h /rosetta -u 65527 -G rosetta -s /bin/bash
# Add rosetta user to sudoers
RUN adduser rosetta wheel
# Passwordless sudo
RUN sed -e 's;^# \(%wheel.*NOPASSWD.*\);\1;g' -i /etc/sudoers
\ No newline at end of file
......@@ -13,6 +13,7 @@
ProxyPass / http://webapp:8080/
ProxyPassReverse / http://webapp:8080/
AllowEncodedSlashes NoDecode
</VirtualHost>
......@@ -32,9 +33,9 @@
ServerName ${ROSETTA_HOST}
ProxyPass / http://webapp:8080/
ProxyPassReverse / http://webapp:8080/
AllowEncodedSlashes NoDecode
SSLEngine on
SSLCertificateFile /root/certificates/rosetta_platform/rosetta_platform.crt
SSLCertificateKeyFile /root/certificates/rosetta_platform/rosetta_platform.key
SSLCACertificateFile /root/certificates/rosetta_platform/rosetta_platform.ca-bundle
......
FROM rosetta/base
MAINTAINER Stefano Alberto Russo <stefano.russo@gmail.com>
#----------------------
# Singularity
#----------------------
# Dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libssl-dev \
uuid-dev \
libgpgme11-dev \
squashfs-tools \
libseccomp-dev \
pkg-config \
cryptsetup-bin \
wget
# Install GO
RUN cd /tmp && wget https://dl.google.com/go/go1.11.linux-amd64.tar.gz
RUN cd /tmp && tar -zxf go1.11.linux-amd64.tar.gz && mv go /usr/local
ENV GOROOT=/usr/local/go
ENV GOPATH=/root/go
ENV PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
COPY singularity-3.4.1.tar.gz /tmp
# Install Singularity
RUN mkdir -p /usr/local/var/singularity/mnt && \
mkdir -p $GOPATH/src/github.com/sylabs && \
cd $GOPATH/src/github.com/sylabs && \
mv /tmp/singularity-3.4.1.tar.gz ./ && \
tar -xzvf singularity-3.4.1.tar.gz
RUN cd $GOPATH/src/github.com/sylabs/singularity && \
./mconfig -p /usr/local && \
make -C builddir && \
make -C builddir install
# Build test image
RUN mkdir /singularity_images && chmod 777 /singularity_images
COPY testimage.def /singularity_images/testimage.def
RUN singularity build /singularity_images/testimage.simg /singularity_images/testimage.def
#----------------------
# Slurm
#----------------------
# Install Slurm
RUN apt-get -y install slurm-wlm
......@@ -23,6 +69,12 @@ COPY slurm.conf /etc/slurm-llnl/slurm.conf
RUN ln -s /var/lib/slurm-llnl /var/lib/slurm-wlm
RUN ln -s /var/log/slurm-llnl /var/log/slurm-wlm
#----------------------
# Test user and
# prestartup
#----------------------
# Add testuser user
RUN useradd testuser
RUN mkdir -p /home/testuser/.ssh
......
FROM quay.io/podman/stable:v3.2.3
#RUN dnf repolist
#RUN dnf update --refresh
# This is necessary due to some base image permission errors.
RUN chown -R podman:podman /home/podman
# Change user
RUN usermod -l testuser podman
RUN usermod -d /home/testuser testuser
RUN ln -s /home/podman /home/testuser
RUN groupmod -n testuser podman
# Change user, from podman to rosetta
RUN usermod -l rosetta podman
RUN usermod -d /rosetta rosetta
RUN ln -s /home/podman /rosetta
RUN groupmod -n rosetta podman
# Replace uid/gid mapping from podman to testuser user
# Replace uid/gid mapping from podman to rosetta user
COPY subuid /etc/subuid
COPY subgid /etc/subgid
#RUN dnf repolist
#RUN dnf update --refresh
RUN dnf install -y docker singularity openssh-server
RUN ssh-keygen -A
# Authorized keys for rosetta
RUN mkdir /rosetta/.ssh
COPY keys/id_rsa.pub /rosetta/.ssh/authorized_keys
# Add rosetta user to sudoers
RUN usermod -aG wheel rosetta
# Passwordless sudo (for everyone)
RUN sed -e 's;^# \(%wheel.*NOPASSWD.*\);\1;g' -i /etc/sudoers
# Add testuser user
RUN groupadd -g 1001 testuser
RUN useradd testuser -d /home/testuser -u 1001 -g 1001 -m -s /bin/bash
# Authorized keys for testuser
RUN mkdir /home/testuser/.ssh
COPY keys/id_rsa.pub /home/testuser/.ssh/authorized_keys
RUN dnf install -y python wget
# Install iputils (ping)
RUN dnf install -y iputils
# Install Docker, Singularity, various utilities including iputils (for ping) and openssh-clients (for scp)
RUN dnf install -y docker singularity openssh-server python wget iputils openssh-clients
# Copy registries.conf to allow insecure access to dregistry
COPY registries.conf /etc/containers/registries.conf
# Generate host keys
RUN ssh-keygen -A
#----------------------
# Entrypoint
#----------------------
# Copy registries.conf to allow insecure access to internal/dev registries
COPY registries.conf /etc/containers/registries.conf
# Copy entrypoint
COPY entrypoint.sh /
......
......@@ -12,6 +12,20 @@ chmod 777 /dev/net/tun
#PROXY_IP=$(ping proxy -c1 | head -n1 | cut -d '(' -f2 | cut -d')' -f1)
#echo "$PROXY_IP rosetta.platform" >> /etc/hosts
# Create shared data directories
mkdir -p /shared/scratch
chmod 777 /shared/scratch
mkdir -p /shared/data/shared
chmod 777 /shared/data/shared
mkdir -p /shared/data/users
chown rosetta:rosetta /shared/data/users/
mkdir -p /shared/data/users/testuser
chown testuser:testuser /shared/data/users/testuser
#---------------------
# Entrypoint command
#---------------------
......
......@@ -85,6 +85,10 @@ short-name-mode="enforcing"
location = "dregistry:5000"
insecure = true
[[registry]]
location = "proxy:5000"
insecure = true
[[registry]]
location = "rosetta.platform:5000"
insecure = true
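With these entries in place, the container engine on the worker can pull from the internal/dev registries without TLS verification; a minimal check (the image name is a placeholder) would be:
# pull from the internal registry without a valid certificate
podman pull proxy:5000/some/image:latest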
......
testuser:10000:5000
\ No newline at end of file
rosetta:10000:5000
\ No newline at end of file
testuser:10000:5000
\ No newline at end of file
rosetta:10000:5000
\ No newline at end of file
from mozilla_django_oidc.auth import OIDCAuthenticationBackend
from mozilla_django_oidc.views import OIDCAuthenticationCallbackView
from .core_app.utils import finalize_user_creation
from django.http import HttpResponseRedirect
# Setup logging
import logging
......@@ -18,9 +20,27 @@ class RosettaOIDCAuthenticationBackend(OIDCAuthenticationBackend):
return user
def get_userinfo(self, access_token, id_token, payload):
# Payload must contain the "email" key
return payload
class RosettaOIDCAuthenticationCallbackView(OIDCAuthenticationCallbackView):
def login_success(self):
# Call parent login_success but do not return
super(RosettaOIDCAuthenticationCallbackView, self).login_success()
logger.debug('Trying to get cookie-based post login redirect')
post_login_page = self.request.COOKIES.get('post_login_redirect')
if post_login_page:
logger.debug('Got "%s" and redirecting', post_login_page )
response = HttpResponseRedirect(post_login_page)
response.delete_cookie('post_login_redirect')
return response
else:
logger.debug('No cookie-based post login redirect found, redirecting to "%s"', self.success_url)
return HttpResponseRedirect(self.success_url)
......@@ -71,6 +71,9 @@ class ComputingManager(object):
# Call actual get task log logic
return self._get_task_log(task, **kwargs)
def is_configured_for(self, user):
return True
class StandaloneComputingManager(ComputingManager):
......@@ -82,8 +85,14 @@ class ClusterComputingManager(ComputingManager):
class SSHComputingManager(ComputingManager):
# SSH-f + keys utils here
pass
def is_configured_for(self, user):
try:
get_ssh_access_mode_credentials(self.computing, user)
except:
return False
else:
return True
......@@ -240,7 +249,7 @@ class SSHStandaloneComputingManager(StandaloneComputingManager, SSHComputingMana
else:
raise NotImplementedError('Accessing a storage with ssh+cli without going through its computing resource is not implemented')
if '$USER' in expanded_base_path:
expanded_base_path = expanded_base_path.replace('$USER', self.task.user.name)
expanded_base_path = expanded_base_path.replace('$USER', task.user.username)
# Expand the bind_path
expanded_bind_path = storage.bind_path
......@@ -250,7 +259,7 @@ class SSHStandaloneComputingManager(StandaloneComputingManager, SSHComputingMana
else:
raise NotImplementedError('Accessing a storage with ssh+cli without going through its computing resource is not implemented')
if '$USER' in expanded_bind_path:
expanded_bind_path = expanded_bind_path.replace('$USER', self.task.user.name)
expanded_bind_path = expanded_bind_path.replace('$USER', task.user.username)
# Add the bind
if not binds:
......@@ -300,7 +309,7 @@ class SSHStandaloneComputingManager(StandaloneComputingManager, SSHComputingMana
else:
raise NotImplementedError('Accessing a storage with ssh+cli without going through its computing resource is not implemented')
if '$USER' in expanded_base_path:
expanded_base_path = expanded_base_path.replace('$USER', self.task.user.name)
expanded_base_path = expanded_base_path.replace('$USER', task.user.username)
# Expand the bind_path
expanded_bind_path = storage.bind_path
......@@ -310,7 +319,7 @@ class SSHStandaloneComputingManager(StandaloneComputingManager, SSHComputingMana
else:
raise NotImplementedError('Accessing a storage with ssh+cli without going through its computing resource is not implemented')
if '$USER' in expanded_bind_path:
expanded_bind_path = expanded_bind_path.replace('$USER', self.task.user.name)
expanded_bind_path = expanded_bind_path.replace('$USER', task.user.username)
# Add the bind
if not binds:
......@@ -324,12 +333,12 @@ class SSHStandaloneComputingManager(StandaloneComputingManager, SSHComputingMana
run_command = 'ssh -o LogLevel=ERROR -i {} -4 -o StrictHostKeyChecking=no {}@{} '.format(computing_keys.private_key_file, computing_user, computing_host)
run_command += '/bin/bash -c \'"rm -rf /tmp/{}_data && mkdir /tmp/{}_data && chmod 700 /tmp/{}_data && '.format(task.uuid, task.uuid, task.uuid)
run_command += 'wget {}/api/v1/base/agent/?task_uuid={} -O /tmp/{}_data/agent.py &> /dev/null && export TASK_PORT=\$(python /tmp/{}_data/agent.py 2> /tmp/{}_data/task.log) && '.format(webapp_conn_string, task.uuid, task.uuid, task.uuid, task.uuid)
run_command += '{} {} run -p \$TASK_PORT:{} {} {} {} '.format(prefix, container_engine, task.container.interface_port, authstring, varsstring, binds)
run_command += 'exec nohup {} {} run -p \$TASK_PORT:{} {} {} {} '.format(prefix, container_engine, task.container.interface_port, authstring, varsstring, binds)
if container_engine == 'podman':
run_command += '--network=private --uts=private '
run_command += '--network=private --uts=private --userns=keep-id '
#run_command += '-d -t {}/{}:{}'.format(task.container.registry, task.container.image_name, task.container.image_tag)
run_command += '-h task-{} -d -t {}/{}:{}'.format(task.short_uuid, task.container.registry, task.container.image_name, task.container.image_tag)
run_command += '"\''
run_command += '-h task-{} -t {}/{}:{}'.format(task.short_uuid, task.container.registry, task.container.image_name, task.container.image_tag)
run_command += '&>> /tmp/{}_data/task.log & echo \$!"\''.format(task.uuid)
else:
raise NotImplementedError('Container engine {} not supported'.format(container_engine))
......@@ -376,7 +385,7 @@ class SSHStandaloneComputingManager(StandaloneComputingManager, SSHComputingMana
stop_command = 'ssh -o LogLevel=ERROR -i {} -4 -o StrictHostKeyChecking=no {}@{} \'/bin/bash -c "{}"\''.format(computing_keys.private_key_file, computing_user, computing_host, internal_stop_command)
out = os_shell(stop_command, capture=True)
if out.exit_code != 0:
if ('No such process' in out.stderr) or ('No such container' in out.stderr):
if ('No such process' in out.stderr) or ('No such container' in out.stderr) or ('no container' in out.stderr) or ('missing' in out.stderr):
pass
else:
raise Exception(out.stderr)
......@@ -402,8 +411,9 @@ class SSHStandaloneComputingManager(StandaloneComputingManager, SSHComputingMana
internal_view_log_command = 'cat /tmp/{}_data/task.log'.format(task.uuid)
elif container_engine in ['docker','podman']:
# TODO: remove this hardcoding
prefix = 'sudo' if (computing_host == 'slurmclusterworker' and container_engine=='docker') else ''
internal_view_log_command = '{} {} logs {}'.format(prefix,container_engine,task.id)
#prefix = 'sudo' if (computing_host == 'slurmclusterworker' and container_engine=='docker') else ''
#internal_view_log_command = '{} {} logs {}'.format(prefix,container_engine,task.id)
internal_view_log_command = 'cat /tmp/{}_data/task.log'.format(task.uuid)
else:
raise NotImplementedError('Container engine {} not supported'.format(container_engine))
......@@ -493,7 +503,7 @@ class SlurmSSHClusterComputingManager(ClusterComputingManager, SSHComputingManag
else:
raise NotImplementedError('Accessing a storage with ssh+cli without going through its computing resource is not implemented')
if '$USER' in expanded_base_path:
expanded_base_path = expanded_base_path.replace('$USER', self.task.user.name)
expanded_base_path = expanded_base_path.replace('$USER', task.user.username)
# Expand the bind_path
expanded_bind_path = storage.bind_path
......@@ -503,7 +513,7 @@ class SlurmSSHClusterComputingManager(ClusterComputingManager, SSHComputingManag
else:
raise NotImplementedError('Accessing a storage with ssh+cli without going through its computing resource is not implemented')
if '$USER' in expanded_bind_path:
expanded_bind_path = expanded_bind_path.replace('$USER', self.task.user.name)
expanded_bind_path = expanded_bind_path.replace('$USER', task.user.username)
# Add the bind
if not binds:
......
......@@ -132,5 +132,8 @@ def private_view(wrapped_view):
else:
log_user_activity("DEBUG", "Redirecting to login since not authenticated", request)
return HttpResponseRedirect('/login')
logger.debug('Setting cookie-based post login redirect to "%s"', request.build_absolute_uri())
response = HttpResponseRedirect('/login')
response.set_cookie('post_login_redirect', request.build_absolute_uri())
return response
return private_view_wrapper
......@@ -280,8 +280,20 @@ to provide help, news and informations on your deployment. Or you can just ignor
auth_mode = 'user_keys',
wms = None,
conf = {'host': 'standaloneworker'},
container_engines = ['singularity','podman'])
container_engines = ['podman','singularity'])
# Demo standalone platform computing plus conf
demo_singlenode_computing = Computing.objects.create(name = 'Demo Standalone Platform',
description = 'A demo standalone computing resource accessed as platform.',
type = 'standalone',
arch = 'amd64',
supported_archs = ['386'],
access_mode = 'ssh+cli',
auth_mode = 'platform_keys',
wms = None,
conf = {'host': 'standaloneworker', 'user': 'rosetta'},
container_engines = ['podman','singularity'])
# Add testuser extra conf for this computing resource
testuser.profile.add_extra_conf(conf_type = 'computing_user', object=demo_singlenode_computing, value= 'testuser')
......@@ -324,7 +336,7 @@ to provide help, news and informations on your deployment. Or you can just ignor
for computing in demo_computing_resources:
# Demo shared computing plus conf
# Demo shared storage
Storage.objects.create(computing = computing,
access_through_computing = True,
name = 'Shared',
......@@ -334,7 +346,7 @@ to provide help, news and informations on your deployment. Or you can just ignor
base_path = '/shared/data/shared',
bind_path = '/storages/shared')
# Demo shared computing plus conf
# Demo personal storage
Storage.objects.create(computing = computing,
access_through_computing = True,
name = 'Personal',
......@@ -343,7 +355,24 @@ to provide help, news and informations on your deployment. Or you can just ignor
auth_mode = 'user_keys',
base_path = '/shared/data/users/$SSH_USER',
bind_path = '/storages/personal')
try:
demo_standalone_computing = Computing.objects.get(name='Demo Standalone Platform')
demo_computing_resources.append(demo_standalone_computing)
# Demo personal storage
Storage.objects.create(computing = computing,
access_through_computing = True,
name = 'Personal',
type = 'generic_posix',
access_mode = 'ssh+cli',
auth_mode = 'user_keys',
base_path = '/shared/data/users/$SSH_USER',
bind_path = '/storages/personal')
except:
pass
......
# Generated by Django 2.2.1 on 2022-04-09 18:13
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('core_app', '0031_container_env_vars'),
]
operations = [
migrations.AddField(
model_name='storage',
name='browsable',
field=models.BooleanField(default=False, verbose_name='Browsable in the file manager?'),
),
migrations.AlterField(
model_name='storage',
name='bind_path',
field=models.CharField(blank=True, max_length=4096, null=True, verbose_name='Bind path'),
),
]
# Generated by Django 2.2.1 on 2022-04-10 15:31
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('core_app', '0032_auto_20220409_1813'),
]
operations = [
migrations.AlterField(
model_name='storage',
name='browsable',
field=models.BooleanField(default=True, verbose_name='Browsable in the file manager?'),
),
]