Blog

  • WIndows-Linux-Serial-Communication-python

    LINUX_WIN_PYTHON_SERIAL_COMMUNICATION

    • This program is for basic serial communication. It supports input and output.
    • Supported OS: Windows, Linux, macOS (coming soon…)

    Preparation

    > pip3 install -r requirements.txt
    
    - If the installation does not work, remove the packages with the following command and then install them again.
    
    > pip3 uninstall -r requirements.txt
    

    Run UART

    usage: UART.py [-h] -p /dev/ttyUSB1 [-buad 9600] [-bsize 8] [-prty N] [-sbits 1] [-tout 1] [-xonxoff] [-rtscts] [-wtout 1] [-dsrdtr] [-ibtout None] [-exclusive True]
                   [-w WATCH_PATTERN [WATCH_PATTERN ...]] [-e]
    
    optional arguments:
      -h, --help
              show this help message and exit
      -p /dev/ttyUSB1, --port /dev/ttyUSB1
          Enter the device path for the UART connection, e.g. /dev/ttyUSB1, COM30, etc.
      -buad 9600, --baudrate 9600
              Baud rate such as 9600 or 115200 etc. (Default : 115200)
      -bsize 8, --bytesize 8
              Number of data bits. (Default : 8)
              choices [5, 6, 7, 8]
      -prty N, --parity N
              Enable parity checking. (Default : N)
              choices=[N, E, O, M, S]
      -sbits 1, --stopbits 1
              Number of stop bits. (Default : 1)
              choices=[1, 1.5, 2]
      -tout 1, --timeout 1
              Set a read timeout value in seconds. (Default : None)
      -xonxoff, --xonxoff
              Enable software flow control. (Default : False)
          If this flag is given, xonxoff is set to True.
      -rtscts, --rtscts
              Enable hardware (RTS/CTS) flow control. (Default : False)
          If this flag is given, rtscts is set to True.
      -wtout 1, --write_timeout 1
              Set a write timeout value in seconds. (Default : None)
      -dsrdtr, --dsrdtr
              Enable hardware (DSR/DTR) flow control. (Default : False)
          If this flag is given, dsrdtr is set to True.
      -ibtout None, --inter_byte_timeout None
               Inter-character timeout. (Default : None)
      -exclusive True, --exclusive True
              Set exclusive access mode (POSIX only).
              A port cannot be opened in exclusive access mode
          if it is already open in exclusive access mode. (Default : None)
      -w WATCH_PATTERN [WATCH_PATTERN ...], --watch WATCH_PATTERN [WATCH_PATTERN ...]
          Enter the watch pattern(s) you want; matching ignores case.
              Patterns are separated by spaces.                        
              e.g. -w ASSERT WARNING ERROR etc...
      -e, --watch_end
          When this feature is activated, the program terminates automatically when a pattern is matched.
    
    e.g.) > LINUX : 'sudo python3 UART.py -p /dev/ttyUSB1'
          > WIN   : 'python3 UART.py -p COM27'
    
    See help for more information.
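
    For reference, here is a minimal sketch of the same behavior written directly against pyserial. This assumes the tool is built on pyserial and that the CLI options above map onto serial.Serial parameters; it is an illustration, not the tool's actual source.

        import serial  # pyserial

        # Mirrors the CLI defaults above: 115200 baud, 8 data bits, no parity, 1 stop bit.
        port = serial.Serial(
            port="/dev/ttyUSB1",   # or "COM27" on Windows
            baudrate=115200,
            bytesize=8,
            parity="N",
            stopbits=1,
            timeout=1,             # read timeout in seconds
        )

        # -w/--watch: case-insensitive patterns; -e/--watch_end: exit on first match.
        watch_patterns = ["assert", "warning", "error"]

        try:
            while True:
                line = port.readline().decode(errors="replace")
                if line:
                    print(line, end="")
                if any(p in line.lower() for p in watch_patterns):
                    break
        finally:
            port.close()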
    

    Log Directory

    • Windows
      • 'C:\Users\[USERNAME]\log'
    • Ubuntu
      • '~/log'
      • Warning : If you want to view the log, run with administrator privileges.
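
    Both paths resolve to a log folder under the user's home directory. A sketch of the equivalent lookup (assumed behavior; the actual implementation may differ):

        from pathlib import Path

        # C:\Users\[USERNAME]\log on Windows, ~/log on Linux
        log_dir = Path.home() / "log"
        log_dir.mkdir(exist_ok=True)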


  • scantailor-experimental

    ScanTailor-Experimental

    Based on Scan Tailor – scantailor.org


    About

    Scan Tailor is an interactive post-processing tool for scanned pages.
    It performs operations such as page splitting, deskewing, and adding/removing borders.

    You give it raw scans, and you get pages ready to be printed or assembled into a PDF
    or DJVU file. Scanning, optical character recognition,
    and assembling multi-page documents are out of scope of this project.

    Scan Tailor is Free Software (which is more than just freeware).
    It’s written in C++ with Qt and released under the GNU General Public License version 3.
    We develop both Windows and GNU/Linux versions.

    History and Future

    This project started in late 2007 and by mid 2010 it reached production quality.

    In 2014, the original developer Joseph Artsimovich stepped aside,
    and Nate Craun (@ncraun)
    took over as the new maintainer.

    For information on contributing and the longstanding plan for the project, please see the
    Roadmap wiki entry.

    For any suggested changes or bugs, please consult the Issues tab.

    Usage

    Scan Tailor is being used not just by enthusiasts, but also by libraries and other institutions.
    Scan Tailor processed books can be found on Google Books and the Internet Archive.

    • Prolog for Programmers. The 47.3MB pdf is the original,
      and the 3.1MB pdf is after using Scan Tailor. The OCR, Chapter Indexing, JBIG2 compression, and PDF Binding were not
      done with Scan Tailor, but all of the scanned image cleanup was. [1]
    • Oakland Township: Two Hundred Years
      by Stuart A. Rammage (also available: volumes 2, 3, 4.1, 4.2, 5.1, and 5.2) [2]
    • Herons and Cobblestones: A History of Bethel and the Five Oaks Area of Brantford Township,
      County of Brant by the Grand River Heritage Mines Society [2]

    Installation and Tips

    Scanning Tips,
    Quick-Start-Guide, and complete
    Usage Guide, including installation information
    (via the installer or
    building from source)
    can be found in the wiki!

    Installation on Windows

    On Windows 10 1809 or higher, you can install Scantailor-Experimental with a single command:

    winget install "Scantailor-Experimental"

    You can also download binaries from the Releases page.

    Additional Links


  • iplan

    iplan

    Dependencies

    $ pip install python-dotenv
    $ pip install django-simple-captcha
    $ pip install django-extensions
    $ pip install python-dateutil
    $ pip install requests
    $ pip install requests_oauthlib
    

    Define the environment file for python-dotenv in project settings

    In iplan/prod_settings.py file, define the variable ENV_FILE which is the location of the environment file settings for your dotenv package:

    ENV_FILE = str(Path(BASE_DIR)) + '/environment_file_name'
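
    To confirm the environment file is being picked up, you can load it the same way prod_settings.py does later in this guide. A minimal sketch (the file name is a placeholder):

        from pathlib import Path

        from dotenv import dotenv_values  # from the python-dotenv package

        BASE_DIR = Path(__file__).resolve().parent  # stand-in for Django's BASE_DIR
        ENV_FILE = str(BASE_DIR) + '/environment_file_name'

        myvars = dotenv_values(ENV_FILE)  # dict of the KEY=VALUE pairs in the file
        print(myvars.get('EMAIL_HOST'))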
    

    Migrate the database

    $ python manage.py makemigrations planner
    $ python manage.py migrate planner
    

    Populate the database

    $ python manage.py runscript populate
    

    Run the server

    $ python manage.py runserver
    

    Deploy on AWS

    1. Clone project at /opt/bitnami/projects with git clone https://github.com/mchesler613/iplan.git
    2. cd to the iplan directory.
    3. Install the dependencies above with sudo.
    4. Edit the project, iplan/wsgi.py file:

    import os
    import sys
    from django.core.wsgi import get_wsgi_application
    
    sys.path.append('/opt/bitnami/projects/iplan')
    os.environ['PYTHON_EGG_CACHE'] = '/opt/bitnami/projects/iplan/egg_cache'
    os.environ['DJANGO_SETTINGS_MODULE'] = 'iplan.settings'
    
    application = get_wsgi_application() 
    
    5. Make a conf directory at the project root.
    $ mkdir conf
    
    6. Edit the conf/httpd-app.conf file.

    <IfDefine !DJANGOSTACK_LOADED>
      Define IS_DJANGOSTACK_LOADED
    WSGIDaemonProcess wsgi-djangostack   processes=2 threads=15    display-name=%{GROUP}
    </IfDefine>
    
    <Directory "/opt/bitnami/projects/iplan/iplan">
        Options +MultiViews
        AllowOverride All
        <IfVersion >= 2.3>
    Require all granted
        </IfVersion>
    
    WSGIProcessGroup wsgi-djangostack
    
    WSGIApplicationGroup %{GLOBAL}
    </Directory>
    
    Alias /iplan/static "/opt/bitnami/python/lib/python3.8/site-packages/django/contrib/admin/static/"
    
    WSGIScriptAlias /iplan '/opt/bitnami/projects/iplan/iplan/wsgi.py'
    
    7. Edit the conf/httpd-prefix.conf file.
    Include '/opt/bitnami/projects/iplan/conf/httpd-app.conf'
    
    8. Edit /opt/bitnami/apache2/conf/bitnami/bitnami.conf as sudo.

    <VirtualHost _default_:80>
        WSGIScriptAlias /iplan /opt/bitnami/projects/iplan/iplan/wsgi.py
        Alias /static /opt/bitnami/projects/iplan/static
        
        <Directory /opt/bitnami/projects/iplan/static>
            Require all granted
        </Directory>
        
        <Directory /opt/bitnami/projects/iplan>
            AllowOverride all
            Require all granted
            Options FollowSymlinks
        </Directory>
     
        DocumentRoot /opt/bitnami/projects/iplan
    </VirtualHost>
     
    Include "/opt/bitnami/apache/conf/bitnami/bitnami-ssl.conf"
    
    9. Edit the iplan/prod_settings.py file for production.

    # Static File settings
    STATIC_URL = '/static/'
    STATIC_ROOT = '/opt/bitnami/projects/iplan/static'
    
    # Redirect to URL after login (defaults to /accounts/profile)
    LOGIN_REDIRECT_URL = '/iplan/planner/'
    
    # Redirect to URL for login (defaults to /accounts/login)
    LOGIN_URL = '/planner/login'
    
    # WSGI settings
    ALLOWED_HOSTS = ['yourdjangosite.com']
    
    # Settings for Django send_mail()
    SITE_URL = 'yourdjangosite.com'
    ENV_FILE = str(Path(BASE_DIR)) + '/your_environment_file_name'
    from dotenv import dotenv_values
    myvars = dotenv_values(ENV_FILE)
    EMAIL_HOST = myvars['EMAIL_HOST'] 
    EMAIL_PORT = 587
    EMAIL_HOST_USER = myvars['EMAIL_HOST_USER'] 
    EMAIL_USE_SSL = False
    EMAIL_USE_TLS = True
    EMAIL_HOST_PASSWORD = myvars['EMAIL_HOST_PASSWORD']
    EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
    
    10. Add the sensitive variables in the file defined in the variable ENV_FILE in settings.py and prod_settings.py.

    EMAIL_HOST='your_email_host'
    EMAIL_HOST_USER='your_email_host_user'
    EMAIL_HOST_PASSWORD='your_email_host_user_password'
    
    11. Change permissions on the database file, iplan/db.sqlite3

    sudo chmod g+w . ./db.sqlite3
    sudo chgrp daemon . ./db.sqlite3
    
    12. Start the Apache server
    sudo /opt/bitnami/ctlscript.sh restart apache
    
    13. Point browser to yourdjangosite.com/iplan/planner
    14. Create a superuser
    $ python manage.py createsuperuser
    
    15. Test the admin login at http://yourdjangosite.com/iplan/planner/admin and log in as the superuser

    Server Timezone

    This version of iPlan assumes everyone is on the same timezone. To set the timezone for your server, visit this link for instructions.

    Cron jobs

    • Setting up cron jobs. The file to edit is /etc/cron.d/your_file_name. See this source.
    • Use the absolute path for the commands inside your script and crontab entry. Instead of
    python manage.py [command]
    

    use

    /opt/bitnami/python/bin/python manage.py [command]
    

    For example:

    30 0 * * * bitnami /opt/bitnami/projects/iplan/check.sh
    

    Static Files

    • Add a definition for STATIC_URL and STATIC_ROOT in iplan/prod_settings.py. For example:

    # Static File settings
    STATIC_URL = '/static/'
    STATIC_ROOT = '/opt/bitnami/projects/iplan/static'

    Then run python manage.py collectstatic to copy the Admin and other static files to the defined STATIC_ROOT directory.

    • Add an alias in /opt/bitnami/apache2/conf/bitnami/bitnami.conf to set the path to the static files. For example:

        Alias /static /opt/bitnami/projects/iplan/static
        
        <Directory /opt/bitnami/projects/iplan/static>
            Require all granted
        </Directory>
    


  • PantherExtension

    Panther Mink Extension

    Mink extension for controlling Chrome | Firefox | Selenium thanks to Symfony Panther.

    Foreword:

    This extension is experimental (though mostly stable); some features may be missing.

    Installation:

    First, you need to install Symfony Panther and its required dependencies, then:

    composer require guikingone/panther-extension

    Usage:

    default:
      suites:
        default:
          contexts:
            - PantherExtension\Context\PantherContext:
            - PantherExtension\Context\WaitContext:
            # Your contexts
    
      extensions:
        PantherExtension\Extension\PantherExtension: ~
        Behat\MinkExtension:
          browser_name: chrome
          base_url: http://localhost
          sessions:
            default:
              panther:
                driver: 'chrome' # Or 'firefox', 'selenium', 'chrome' is the default value

    WaitContext was introduced in 0.4.

    If you need to use Selenium, just adapt the session configuration:

    # ...
    
      extensions:
        PantherExtension\Extension\PantherExtension: ~
        Behat\MinkExtension:
          browser_name: chrome
          base_url: http://localhost
          sessions:
            default:
              panther:
                driver: 'selenium'
                selenium:
                  hub_url: 'http://127.0.0.1:4444/wd/hub'

    Here’s a simple example using a POC project which calls the API Platform website:

    Feature:
      As a newbie in API-Platform, I want to document myself in many features
    
      Scenario: I should be able to see the main documentation                           
        Given I am on "https://github.com/"                                                                
        And I should see "REST and GraphQL framework to build modern API-driven projects"
    
      Scenario: I should be able to see the main documentation                                           
        Given I am on "https://github.com/"                                                                                
        And I go to "/docs/distribution/"                                                                
        Then I should see "API Platform is the most advanced API platform, in any framework or language."
    
      Scenario: I should be able to document myself about GraphQL support
        Given I am on "https://github.com/"                                                
        And I follow "Get started"                                       
        When I follow "Adding GraphQL Support"                           
        Then I should be on "/docs/distribution/#adding-graphql-support" 
        Then I should see "You now have a GraphQL API!"                  
    
      Scenario: I should be able to document myself about GraphQL support thanks to the search field
        Given I am on "https://github.com/"                                                                           
        When I fill in "SEARCH..." with "GraphQL"                                                   
        And I wait for "#algolia-autocomplete-listbox-0"                                            
        Then I should see "Documentation"                                                           
        And I should see "Search by"                                                                
        And I should see "Enabling GraphQL"                                                         
    
      Scenario: I should be able to test the demo                  
        Given I am on "https://github.com/"                                          
        And I follow "Demo"                                        
        Then I should be on "https://demo-client.api-platform.com/"
        When I follow "API"                                        
        Then I should be on "https://demo.api-platform.com/"       
    
      Scenario: I should be able to test the demo                                         
        Given I am on "https://github.com/"                                                                 
        And I follow "Community"                                                          
        And I create a new client "test" using the "chrome" driver                        
        Then I switch to client "test"                                                    
        And I go to "https://github.com/"                                                                   
        Then I should see "REST and GraphQL framework to build modern API-driven projects"
        Then I remove the client "test"                                                   
        Then I should see "API Platform's community"                                      
    
    6 scenarios (6 passed)
    29 steps (29 passed)
    0m28.61s (20.63Mb)

    Documentation

    The full documentation can be found here

    CI usage

    Please refer to Symfony Panther documentation about using it in CI environments.

    Development

    The project can be launched using:

    make boot

    Every test can be launched using:

    make tests

    For more commands or help, please use:

    make

    Contributing

    Just fork this repo and submit a new PR!


  • Project-1—Automated-ELK-Stack-Deployment

    Automated ELK Stack Deployment

    The files in this repository were used to configure the network depicted below.

    Network Diagram

    These files have been tested and used to generate a live ELK deployment on Azure. They can be used to recreate the entire deployment pictured above; alternatively, select portions of the configuration and YAML files may be used to install only certain pieces of it, such as Filebeat.

    This document contains the following details:

    • Description of the Topology
    • Access Policies
    • ELK Configuration
    • Beats in Use
    • Machines Being Monitored
    • How to Use the Ansible Build

    Description of the Topology

    The main purpose of this network is to expose a load-balanced and monitored instance of DVWA, the D*mn Vulnerable Web Application.

    Load balancing ensures that the application will be highly available, in addition to restricting inbound access to the network.

    What aspect of security do load balancers protect?

    • Load balancers are designed to take incoming traffic and distribute it across multiple resources, preventing any single server from being overloaded.
    • Load balancers play an important role in security by defending against distributed denial-of-service (DDoS) attacks.

    What is the advantage of a jump box?

    • A jump box virtual machine is the only machine exposed on the public network, hardened to withstand malicious threats and attacks. It is used to manage the other systems and is treated as a single entryway to a server group from within your security zone.
    • The advantage of having a jump box is that it limits access to the servers behind it, which are not directly reachable over the network.

    Integrating an ELK server allows users to easily monitor the vulnerable VMs for changes to their file systems, as well as watch system metrics such as CPU usage, attempted SSH logins, and sudo escalation failures.

    What does Filebeat watch for?

    • Filebeat: collects log data and events from the file system.

    What does Metricbeat record?

    • Metricbeat: collects machine metrics and statistics, such as uptime.

    The configuration details of each machine may be found below.

    | Name       | Function    | IP Address               | Operating System | Server                  |
    |------------|-------------|--------------------------|------------------|-------------------------|
    | Jump Box   | Gateway     | 104.43.255.56; 10.0.0.1  | Linux            | Ubuntu Server 18.04 LTS |
    | Web-1 VM   | DVWA Server | 10.0.0.5                 | Linux            | Ubuntu Server 18.04 LTS |
    | Web-2 VM   | DVWA Server | 10.0.0.6                 | Linux            | Ubuntu Server 18.04 LTS |
    | Web-3 VM   | DVWA Server | 10.0.0.7                 | Linux            | Ubuntu Server 18.04 LTS |
    | ELK Server | Monitoring  | 20.242.105.231; 10.1.0.7 | Linux            | Ubuntu Server 18.04 LTS |

    Note: In addition to the above, Azure has provisioned a load balancer in front of all the machines except for the Jump Box. The load balancer’s targets are organized into the following availability zones: Web-1, Web-2, Web-3


    Access Policies

    The machines on the internal network are not exposed to the public Internet.

    Only the Jump Box Provisioner machine can accept connections from the Internet. Access to this machine is only allowed from the following IP addresses:

    • Add whitelisted IP addresses: Local Admin IP, Workstation (My Personal IP)

    Machines within the network can only be accessed by Workstation (My IP) and Jump Box Provisioner.

    Which machine did you allow to access your ELK VM?

    • Jump Box Provisioner IP: 10.0.0.4 via SSH Port 22

    What was its IP address?

    • Local Admin IP, Workstation (My Personal IP) via port TCP 5601

    A summary of the access policies in place can be found in the table below.

    | Name       | Publicly Accessible | Allowed IP Addresses | Port     | Server                  |
    |------------|---------------------|----------------------|----------|-------------------------|
    | Jump Box   | Yes                 | Local Admin IP       | SSH 22   | Ubuntu Server 18.04 LTS |
    | Web-1 VM   | No                  | 10.0.0.5             | SSH 22   | Ubuntu Server 18.04 LTS |
    | Web-2 VM   | No                  | 10.0.0.6             | SSH 22   | Ubuntu Server 18.04 LTS |
    | Web-3 VM   | No                  | 10.0.0.7             | SSH 22   | Ubuntu Server 18.04 LTS |
    | Elk Server | No                  | Local Admin IP       | TCP 5601 | Ubuntu Server 18.04 LTS |

    Elk Configuration

    Ansible was used to automate configuration of the ELK machine. No configuration was performed manually, which is advantageous because…

    What is the main advantage of automating configuration with Ansible?

    • Ansible is an open source tool that provides simple configuration management, cloud provisioning, and application deployment.
    • Allows you to deploy YAML playbooks.
    Click here to view Steps on Creating an ELK Server.

    We will create an ELK server within a virtual network. Specifically we will:

    • Create a new vNet
    • Create a Peer Network Connection
    • Create a new VM
    • Create an Ansible Playbook
    • Downloading and Configuring the Container
    • Launch and Expose the Container

    Creating a New vNet

    1. Create a new vNet located in the same resource group you have been using.

      • Make sure this vNet is located in a new region, not the same region as your other VMs.

      • Leave the rest of the settings at default.

      • Notice that in this example Azure automatically created a new address space of 10.1.0.0/16. If your network is different (10.1.0.0 or 10.3.0.0), that is fine as long as you accept the default settings; Azure automatically creates a network that will work.

    Create a Peer Network Connection

    1. Create a peer network connection between your vNets. This will allow traffic to pass between your vNets and regions. This peering creates both a connection from your first vNet to your second vNet and a reverse connection from your second vNet back to your first vNet, allowing traffic to pass in both directions.

      • Navigate to ‘Virtual Network’ in the Azure Portal.

      • Select your new vNet to view its details.

      • Under ‘Settings’ on the left side, select ‘Peerings’.

      • Click the + Add button to create a new Peering.

      • Make sure your new Peering has the following settings:

        • A unique name of the connection from your new vNet to your old vNet.

          • Elk-to-Red would make sense
        • Choose your original RedTeam vNet in the dropdown labeled ‘Virtual Network’. This is the network you are connecting to your new vNet and you should only have one option.

        • Name the resulting connection from your RedTeam Vnet to your Elk vNet.

          • Red-to-Elk would make sense
      • Leave all other settings at their defaults.

    The following screenshot displays the result of the new peering connections between your ELK vNet and your old vNet.

    Create a new VM

    1. Creating a new VM

      • Create a new Ubuntu VM in your virtual network with the following configuration:
      • VM must have at least 4GB of RAM.
      • The VM must have a public IP address.
      • The VM must be added to the new region in which you created your new vNet and create a new basic network security group for it.
      • After creating the VM make sure that it works by connecting to it from your Jump-box using ssh username@jump.box.ip
         ssh RedAdmin@104.43.255.56
      • Check your Ansible container: sudo docker ps

      • Locate the container name: sudo docker container list -a

      • Start the container: sudo docker container start peaceful_borg

      • Attach the container: sudo docker attach peaceful_borg

      • Copy the SSH key from the Ansible container on your jump box: cat ~/.ssh/id_rsa.pub

      • Configure a new VM using that SSH key.

    Configuring Container

    1. Downloading and Configuring Container

      • Configure your hosts file inside the Ansible container: cd /etc/ansible/, then edit the file with nano /etc/ansible/hosts and input the IP addresses of your VMs followed by ansible_python_interpreter=/usr/bin/python3

      • Create a Playbook that installs Docker and configures the container

      • Run the ELK playbook:

         ansible-playbook install-elk.yml

    The following screenshot displays the result of running the ELK installation YML file.

    Creating ELK Playbook

    The playbook implements the following tasks:

    Configure ELK VM with Docker

    - name: Configure ELK VM with Docker
      hosts: elk
      remote_user: RedAdmin
      become: true
      tasks:

    Install Docker.io

    - name: Install docker.io
      apt:
        update_cache: yes
        force_apt_get: yes
        name: docker.io
        state: present

    Install Python3-pip

    - name: Install python3-pip
      apt:
        force_apt_get: yes
        name: python3-pip
        state: present

    Install Docker Python Module

    - name: Install Docker python module
      pip:
        name: docker
        state: present

    Increase virtual memory

    - name: Use more memory
      sysctl:
        name: vm.max_map_count
        value: 262144
        state: present
        reload: yes

    Download and Launch a Docker ELK Container with ports 5601, 9200, 5044.

    - name: Download and launch a docker elk container
      docker_container:
        name: elk
        image: sebp/elk:761
        state: started
        restart_policy: always
        ports:
          - 5601:5601
          - 9200:9200
          - 5044:5044

    Enable Service Docker on Boot

    - name: Enable service docker on boot
      systemd:
        name: docker
        enabled: yes

    After the ELK container is installed, SSH into your ELK VM from the jump box (ssh username@your.ELK-VM.IP) and double-check that the elk-docker container is running.

       ssh RedAdmin@10.1.0.7

    The screenshot displays the results when successfully connected to ELK via SSH

    The following screenshot displays the result of running docker ps after successfully configuring the ELK instance.

    docker ps output

    Restrict access to the ELK VM using Azure network security groups.

    • You will need to add your public IP address to a whitelist. Open the virtual network’s existing NSG and create an incoming rule for your security group that allows TCP traffic on port 5601 from your public IP address.

    Verify that you can access your server by navigating to http://[your.ELK-VM.External.IP]:5601/app/kibana. Use the public IP address of your new VM.

       http://20.242.105.231:5601/app/kibana

    You should see this page:

    If you can get on this page, congratulations! You have successfully created an ELK Server!


    Target Machines & Beats

    This ELK server is configured to monitor the following machines:

    • Web-1 VM: 10.0.0.5
    • Web-2 VM: 10.0.0.6
    • Web-3 VM: 10.0.0.7

    We have installed the following Beats on these machines:

    • Filebeat
    • Metricbeat

    These Beats allow us to collect the following information from each machine:

    Filebeat:

    • Filebeat monitors the specified log file or location, collects log events, and forwards them to Elasticsearch or Logstash for indexing.
    • Filebeat is used to collect and send log files.
    • Filebeat can be installed on almost any operating system, including Docker containers. It also contains internal modules for specific platforms such as Apache, MySQL, and Docker, including default configurations and Kibana objects for these platforms.

    Metricbeat:

    • Metricbeat helps monitor your server by collecting metrics and statistics from the systems and services running on it and sending them to the output you specify, such as Elasticsearch or Logstash.
    • Like Filebeat, Metricbeat supports an internal module for collecting statistics from a particular platform.
    • You can use these modules and a subset called metric sets to configure how often Metricbeat collects metrics and the specific metrics it collects.
    • We use it for failed SSH login attempts, sudo escalations, and CPU/RAM statistics.
    Click here to view Steps on Creating Filebeat and Metricbeat.

    We will set up two tools that feed our ELK monitoring server: Filebeat and Metricbeat. Specifically we will:

    • Install Filebeat and Metricbeat on the Web VMs
    • Create the Filebeat and Metricbeat Configuration Files
    • Create a Filebeat and Metricbeat Installation Playbook
    • Verify Filebeat and Metricbeat are Installed

    Installing Filebeat and Metricbeat on DVWA Container

    1. Make sure that the ELK container is running:

      • Navigate to Kibana: http://[your.ELK-VM.External.IP]:5601/app/kibana. Use the public IP address of the ELK server that you created.

      • If Kibana is not up and running, open a terminal on your PC and SSH into ELK Server and start your ELK-docker.

        • Run docker container list -a
        • sudo docker start elk
    2. Use the ELK server’s GUI to navigate to the Filebeat installation instructions for Linux.

      • Navigate to your ELK server’s IP:
        • Click on Add log data
        • Select System Logs
        • Click on DEB tab under Getting Started
    3. Use the ELK server’s GUI to navigate to the Metricbeat installation instructions for Linux.

      • Navigate to your ELK server’s IP:
        • Click on Add metric data
        • Select Docker metrics
        • Click on DEB tab under Getting Started

    Create Filebeat and Metricbeat Configuration File

    1. We will create and edit the Filebeat and Metricbeat configuration files.

      • Start by opening a terminal and SSH into your Jump-box and start up the Ansible container.
      • Navigate to our Ansible container’s files directory and edit the filebeat-config.yml and metricbeat-config.yml configuration files.
      • Username will be elastic and the password is changeme

    Scroll down to line #1106 and replace the IP address with the IP address of your ELK VM.

    output.elasticsearch:
      hosts: ["10.1.0.7:9200"]
      username: "elastic"
      password: "changeme"

    Scroll down to line #1806 and replace the IP address with the IP address of your ELK VM.

    setup.kibana:
      host: "10.1.0.7:5601"

    When finished, save both files in /etc/ansible/files.

    Creating Filebeat and Metricbeat Installation Playbook

    1. Create the Filebeat and Metricbeat playbooks and save them in the /etc/ansible/roles directory.

    First, nano filebeat-playbook.yml and use the Filebeat template below:

    - name: installing and launching filebeat
      hosts: webservers
      become: yes
      tasks:
    
      - name: download filebeat deb
        command: curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.1-amd64.deb
    
      - name: install filebeat deb
        command: dpkg -i filebeat-7.6.1-amd64.deb
    
      - name: drop in filebeat.yml
        copy:
          src: /etc/ansible/files/filebeat-config.yml
          dest: /etc/filebeat/filebeat.yml
    
      - name: enable and configure system module
        command: filebeat modules enable system
    
      - name: setup filebeat
        command: filebeat setup
    
      - name: start filebeat service
        command: service filebeat start
    
      - name: enable service filebeat on boot
        systemd:
          name: filebeat
          enabled: yes

    Next, nano metricbeat-playbook.yml and use the Metricbeat template below:

    - name: Install metric beat
      hosts: webservers
      become: true
      tasks:
        # Use command module
      - name: Download metricbeat
        command: curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.6.1-amd64.deb
    
        # Use command module
      - name: install metricbeat
        command: dpkg -i metricbeat-7.6.1-amd64.deb
    
        # Use copy module
      - name: drop in metricbeat config
        copy:
          src: /etc/ansible/files/metricbeat-config.yml
          dest: /etc/metricbeat/metricbeat.yml
    
        # Use command module
      - name: enable and configure docker module for metric beat
        command: metricbeat modules enable docker
    
        # Use command module
      - name: setup metric beat
        command: metricbeat setup
    
        # Use command module
      - name: start metric beat
        command: service metricbeat start
    
        # Use systemd module
      - name: enable service metricbeat on boot
        systemd:
          name: metricbeat
          enabled: yes
    
    2. Run both playbooks to confirm that they work: ansible-playbook filebeat-playbook.yml and ansible-playbook metricbeat-playbook.yml

    This screenshot displays the results for filebeat-playbook:

    This screenshot displays the results for metricbeat-playbook:

    3. Verify that the playbooks work by navigating to the Filebeat and Metricbeat installation pages on the ELK server GUI and, under Step 5: Module Status, clicking Check Data.

    The screenshot displays the results of the ELK stack successfully receiving logs.

    The screenshot displays the results of the ELK stack successfully receiving metrics.


    Using the Playbook

    In order to use the playbook, you will need to have an Ansible control node already configured. Assuming you have such a control node provisioned:

    SSH into the control node and follow the steps below:

    • Update the hosts file /etc/ansible/hosts to include ELK server IP 10.1.0.7

    ELK Host

    • Run the ELK, Filebeat and Metricbeat playbooks:
    	ansible-playbook install-elk.yml
    	ansible-playbook filebeat-playbook.yml
    	ansible-playbook metricbeat-playbook.yml
    
    • Navigate to http://[your.ELK-VM.External.IP]:5601/app/kibana to check that the installation worked as expected.
    Click here to view how to verify Elk Server is working with Filebeat and Metricbeat.

    We will verify the ELK server is working with Filebeat and Metricbeat by pulling logs and metrics from our web VMs.

    Three tasks are implemented to test whether the ELK server is pulling both logs and metrics from the web VMs we created:

    1. SSH Barrage: Generating a high amount of failed SSH login attempts.

    • Run ssh username@ip.of.web.vm
    • An error should occur as shown in the screenshot below:

    • Write a script that creates 1000 login attempts on the webserver 10.0.0.5.
       for i in {1..1000};
       do
        ssh sysadmin@10.0.0.5;
       done;
    • Write a script with a nested loop that generates SSH login attempts across all 3 of your web server VMs.
       while true;
       do
        for i in {5..7};
         do
          ssh sysadmin@10.0.0.$i;
         done;
       done

    The screenshot displays the Kibana logs generated by running the scripts.

    2. Linux Stress: Generating a high amount of CPU usage on VM servers to verify that Kibana picks up data.

    • While in the Jump-Box, start and attach to the Ansible container, then log in to your web server VM.
       $sudo docker container list -a 
       $sudo docker start [CONTAINER NAME]
       $sudo docker attach [CONTAINER NAME]
    • SSH into your web VM: ssh username@web.ip.vm
    • Run command: sudo apt install stress which installs a stress program.
    • Run command: sudo stress --cpu 1, which stresses one CPU; let it run for a minute or so.
    • View the metrics in Kibana, which will show the CPU usage, as in the screenshot below:

    3. wget-DoS: Generating a high amount of web requests to our VM servers to make sure that Kibana picks up data.

    • Log into the Jump-Box VM and run wget ip.of.web.vm; an index.html file will be downloaded from your web VM to your jump box.
    • Write a loop script that creates 1000 web requests to the 10.0.0.5 server and downloads the files onto your jump box.
       for i in {1..1000};
       do
        wget 10.0.0.5;
       done;
    • View the metrics in Kibana, which will show the Load, Memory Usage, and Network Traffic, as in the screenshot below:


    As a bonus, here are the specific commands needed to download the playbook, update the files, and so on.

    | Command | Explanation |
    |---------|-------------|
    | ssh username@[Jump.box.IP] | Connect to Jump-Box VM |
    | ssh-keygen | Generate a public SSH key (needed to set up a VM) |
    | cat ~/.ssh/id_rsa.pub | Read the SSH public key |
    | docker ps | List running containers |
    | docker start [CONTAINER] | Start a container |
    | docker attach [CONTAINER] | Attach to a running container |
    | docker stop [CONTAINER] | Stop a running container |
    | cd /etc/ansible | Change directory to /etc/ansible |
    | nano /etc/ansible/hosts | Edit hosts file |
    | nano /etc/ansible/ansible.cfg | Edit ansible configuration file |
    | nano filebeat-config.yml | Edit Filebeat configuration yml file |
    | nano filebeat-playbook.yml | Edit Filebeat playbook yml file |
    | nano metricbeat-config.yml | Edit Metricbeat configuration yml file |
    | nano metricbeat-playbook.yml | Edit Metricbeat playbook yml file |
    | ansible-playbook [location][filename.yml] | Execute an Ansible playbook |
    | curl [options/URL] | Client URL: transfers data over various network protocols |
    | dpkg -i [package-file] | Debian package manager; -i installs a package file |
    | exit | Exit the shell |

    Resources

  • FalsoJNI

    FalsoJNI

    Setup
    Implementing Methods
    Implementing Fields
    Tips
    TODO
    Credits
    License

    FalsoJNI (falso as in fake from Italian) is a simple, zero-dependency fake
    JVM/JNI interface written in C.

    It is created mainly to make JNI-heavy Android→PSVita ports easier, but probably
    could be used for other purposes as well.

    Setup

    Since there are no dependencies, FalsoJNI is not supplied with a Makefile of its
    own, so to get started just include all of the source files in your own
    Makefile/CMakeLists.txt:

    FalsoJNI/FalsoJni.c
    FalsoJNI/FalsoJni_ImplBridge.c
    FalsoJNI/FalsoJni_Logger.c
    FalsoJNI/converter.c
    

    The second thing you need to do is create your own FalsoJNI_Impl file. You
    will use it later to provide implementations for custom JNI Methods (the
    ones called with jni->CallVoidMethodV and similar) and Fields.

    To do this, from FalsoJNI_ImplSample.h copy the definitions between
    COPY STARTING FROM HERE! and COPY UP TO HERE! to your project in any .c
    file (you could also split it up into several files if you need to).

    After that, you can initialize FalsoJNI and supply the JNIEnv and JavaVM objects
    to your client application, like this:

    #include "FalsoJNI.h"
    
    int main() {
        // ...
        
        jni_init(); // Initializes jvm and jni objects
    
        int (*JNI_OnLoad)(JavaVM* jvm) = (void*)so_symbol(&so_mod,"JNI_OnLoad");
        JNI_OnLoad(&jvm);
    
        // ...
    }

    That’s it for the basic setup. In a theoretical situation where your client
    application doesn’t use any Methods or Fields, you’re done here.
    Otherwise, read on.

    Implementing Methods

    Step 1. Create functions

    The easiest way to figure out which methods you need to implement is to
    run the app as-is and look for FalsoJNI’s errors in logs, particularly with
    GetMethodID / GetStaticMethodID functions:

    [ERROR][/tmp/soloader/FalsoJNI.c:295][GetMethodID] [JNI] GetMethodID(env, 0x83561570, "SetShiftEnabled", "(Z)V"): not found
    [ERROR][/tmp/soloader/FalsoJNI.c:295][GetMethodID] [JNI] GetMethodID(env, 0x83561570, "Shutdown", "()V"): not found
    

    Two important things you get from this log are the method name
    ("SetShiftEnabled") and the method signature ("(Z)V").

    You can learn what each symbol in Java type signature means here.

    To cut a long story short, here are a few self-explanatory examples of how
    Java method signatures are translated into FalsoJNI-compatible implementations:

    // FalsoJNI always passes arguments as a va_list to be able to make single
    // function implementation no matter how it is called (i.e. CallMethod,
    // CallMethodV, or CallMethodA ).
    
    // "SetShiftEnabled", "(Z)V"
    void SetShiftEnabled(jmethodID id, va_list args) { // V (ret type) is a void
        jboolean arg = va_arg(args, jboolean); // Z is a boolean
        // do something
    }
    
    // "GetDisplayOrientationLock", "()I"
    jint GetDisplayOrientationLock(jmethodID id, va_list args) { // I (ret type) is an integer
        // no arguments here
        return 0;
    }
    
    // "GetUsername", "(Ljava/lang/String;)Ljava/lang/String;"
    jstring GetUsername(jmethodID id, va_list args) { // Ljava/lang/String; (ret type) is a jstring
        jstring _email = va_arg(args, jstring);
        
        // If you want to work with Java strings, always use respective JNI methods!
        // They are NOT c-strings.
        const char * email = jni->GetStringUTFChars(&jni, _email, NULL);
        const char * username = MyCoolFunctionToLookupUsername(_email);
        jni->ReleaseStringUTFChars(&jni, _email, email);
        
        return jni->NewStringUTF(&jni, username);
    }
    
    // "read", "([BII)I"
    jint InputStream_read(jmethodID id, va_list args) { // I (ret type) is an integer
        jbyteArray _b = va_arg(args, char*); // [B is a byte array.
        jint off = va_arg(args, int); // I is an int
        jint len = va_arg(args, int); // I is an int
    
        // Before accessing/changing the array elements, we have to do the following:
        JavaDynArray * jda = (JavaDynArray *) _b;
        if (!jda) {
            log_error("[java.io.InputStream.read()] Provided buffer is not a valid JDA.");
            return 0;
        }
    
        char * b = jda->array; // Now this array we can work with
    }

    Pay great attention to the last example. Java arrays are notably different
    from C arrays in that they always carry their size information, so
    FalsoJNI mimics Java array behavior with a special struct, JavaDynArray
    (or jda for short).

    Every time you receive an array of any kind as an argument, you have to get
    the “real”, underlying array from it like shown in the example. You can also
    use the jda_sizeof(JavaDynArray *) function to get the length of the array you
    are operating on.

    If you need to return an array from a Java method implementation, the same
    applies: work with jda->array, and return the jda.

    Also notice the second-to-last example to see how you can work with Java strings.

    Step 2. Put them in relevant arrays

    Now that you have your implementations in place, the only thing left to do
    to allow the client application to use them is to fill in the arrays in the
    implementation file you copied from FalsoJNI_ImplSample.h earlier.

    You just need to figure out the return types for your methods and come up with
    any (unique!) method IDs you like. Example of filling the arrays for methods
    from Step 1:

    NameToMethodID nameToMethodId[] = {
        { 100, "SetShiftEnabled", METHOD_TYPE_VOID },
        { 101, "GetDisplayOrientationLock", METHOD_TYPE_INT },
        { 102, "read", METHOD_TYPE_INT },
    };
    
    MethodsVoid methodsVoid[] = {
        { 100, SetShiftEnabled }
    };
    
    MethodsInt methodsInt[] = {
        { 101, GetDisplayOrientationLock },
        { 102, InputStream_read }
    };

    Implementing Fields

    With Fields, it’s basically the same thing. Run your app, look for the errors
    in GetFieldID, GetStaticFieldID to figure out the needed Field names and
    signatures (well, just types in this case).

    When you know them, fill in the arrays in the same fashion:

    NameToFieldID nameToFieldId[] = {
        { 8, "screenWidth",            FIELD_TYPE_INT },
        { 9, "screenHeight",           FIELD_TYPE_INT },
        { 10, "is_licensed",           FIELD_TYPE_BOOLEAN }
    };
    
    FieldsBoolean fieldsBoolean[] = {
        { 10, JNI_TRUE }
    };
    
    FieldsInt fieldsInt[] = {
        { 8, 960 },
        { 9, 544 },
    };

    Everything else will be taken care of by FalsoJNI.

    Tips

    1. There is very verbose logging in this lib to debug difficult situations.
      Either define FALSOJNI_DEBUGLEVEL or edit FalsoJNI.h if you need to change
      the verbosity level:

    #define FALSOJNI_DEBUG_NO    4
    #define FALSOJNI_DEBUG_ERROR 3
    #define FALSOJNI_DEBUG_WARN  2
    #define FALSOJNI_DEBUG_INFO  1
    #define FALSOJNI_DEBUG_ALL   0
    
    #ifndef FALSOJNI_DEBUGLEVEL
    #define FALSOJNI_DEBUGLEVEL FALSOJNI_DEBUG_WARN
    #endif
    2. There are things in JNI that cannot be implemented without some terrible
      overengineering. If you come across one of them, the library will throw
      a warning-level log at you.

    3. I tried to keep the code as clean and self-explanatory as possible, but
      didn’t have time yet to write proper documentation. For further
      info, look at the FalsoJNI_ImplBridge.h header for common type definitions
      and JDA functions.

    4. The Oracle JNI spec is your friend.

    TODO

    1. Exception handling. They are completely ignored now.
    2. GetArrayLength for ObjectField values. (if needed?)
    3. MonitorEnter/MonitorExit (per-javaobject semaphores).
    4. DirectByteBuffers.
    5. Keep track of references and destroy objects when there aren’t any left.
    6. Dry Run mode that would record methods/fields definitions to
      FalsoJNI_Impl.c for you.

    Credits

    • TheFloW and Rinnegatamante for fake JNI interfaces implementations
      in gtasa_vita that
      served as inspiration and basis for this lib.

    License

    This software may be modified and distributed under the terms of
    the MIT license. See the LICENSE file for details.

    Contains parts of Dalvik implementation of JNI interfaces,
    copyright (C) 2008 The Android Open Source Project,
    licensed under the Apache License, Version 2.0.

    Includes converter.c and converter.h,
    copyright (C) 2015 Jonathan Bennett jon@autoitscript.com,
    licensed under the Apache License, Version 2.0.


  • BreakGlass

    BreakGlass

    The Highly Configurable Temporary GCP Privilege Escalation Tool

    What is BreakGlass?

    BreakGlass is a tool that allows developers to temporarily escalate their own GCP permissions at any time. It is like the sudo command for GCP permissions. Developers will be able to fix things at 3 AM without waking up the team’s RP.


    How it works

    1. Sign into the app with your GCP credentials
    2. Select a project
    3. Select the permissions you need
    4. Select a timeframe
    5. Provide your reasoning for breaking the glass
    6. Your permissions will be provided and the event will be logged

    Getting Started

    1. $ git clone https://github.com/Stillerman/BreakGlass

    2. Create a new GCP project that will house the BreakGlass server. gcloud projects create breakglass-{UniqueID} --name="BreakGlass"

      Make sure the unique ID is a company-specific identifier, because the project ID must be unique across all projects on Google Cloud.

      Set that project to default with gcloud config set project breakglass-{UniqueId}

    3. Create a service account

      gcloud iam service-accounts create sa-breakglass \
      --description="BreakGlass service account" \
      --display-name="sa-breakglass-disp"
      --project=breakglass-{UniqueID from above}

      You will now be able to see the account with

      gcloud iam service-accounts list
      

      It will be something like sa-breakglass@breakglass-{uniqueID}.iam.gserviceaccount.com. Download the key.json file with the following command (be sure you are in the root of the directory you cloned):

      gcloud iam service-accounts keys create ./key.json \
          --iam-account {The service account you created above}
      

      Sign in by running the following

      gcloud auth activate-service-account {service account} --key-file=key.json
      
    4. Grant Permissions

      Enable the Cloud Resource Manager API here. Note: be sure that this is for the BreakGlass project!

      Next, grant sa-breakglass folder admin in all of the folders in which you would like users to have the chance to escalate. BreakGlass will only be able to see and update projects where it is the folder admin.

      After permissions are configured, run

      gcloud projects list
      

      and make sure you can see the projects you want BreakGlass to have access to. Note: it might take 3-5 minutes for the permissions to update and the projects to become visible.

    5. Add OAuth to breakglass project

      Go to the cloud console, select the breakglass project and then navigate to APIs & Services -> Credentials. Click “Create Credentials” -> OAuth Client ID -> Configure Consent Screen -> Internal, then provide a display name (probably breakglass) -> Save

      Now go back to credentials -> Create Credentials -> OAuth Client Id -> Application type: Web Application

      Here, you name the key (the name doesn’t matter) and add “Authorized JavaScript Origins”. Add just “http://localhost:8080” for now; we will come back to this later.

      Click create and copy the client ID for later. You won’t need the secret.

    6. Configure Breakglass

      Copy K8s/breakglass-configmap.yaml.example to K8s/breakglass-configmap.yaml and configure it to your needs. Read about possible configurations here.

      Note you will need the OAuth Client Id from the previous step.

    7. Build the project

      Build the docker image in the minikube context with

      yarn k8s
      

      Configure Kubernetes Project with

      minikube start
      kubectl apply -f K8s
      

      Now the project will be running, but you have not whitelisted the port on the OAuth, so it will not work as is. Ensure everything is working properly by forwarding the port to the pod

      kubectl port-forward {Name of pod that was created} 8080:8080
      

      Now navigate to http://localhost:8080

    8. Done.

  • aioodbc

    aioodbc


    aioodbc is a Python 3.7+ module that makes it possible to access ODBC databases with asyncio. It relies on the awesome pyodbc library and preserves the same look and feel. Internally aioodbc employs threads to avoid blocking the event loop; threads are not as bad as you think! Other drivers like motor use the same approach.

    aioodbc is fully compatible and tested with uvloop. Take a look at the test suite, all tests are executed with both the default event loop and uvloop.
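
    For example, a minimal sketch of running a query under uvloop (assuming uvloop is installed; uvloop.install() makes asyncio.run() use uvloop's event loop):

        import asyncio

        import aioodbc
        import uvloop


        async def main():
            dsn = "Driver=SQLite;Database=sqlite.db"
            conn = await aioodbc.connect(dsn=dsn)
            cur = await conn.cursor()
            await cur.execute("SELECT 42 AS age;")
            print(await cur.fetchone())
            await cur.close()
            await conn.close()


        uvloop.install()  # all subsequent asyncio.run() calls use uvloop's loop
        asyncio.run(main())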

    Basic Example

    aioodbc is based on pyodbc and provides the same API; you just need to use yield from conn.f() or await conn.f() instead of conn.f()

    Properties are unchanged, so conn.prop is correct as well as conn.prop = val.

    import asyncio
    
    import aioodbc
    
    
    async def test_example():
        dsn = "Driver=SQLite;Database=sqlite.db"
        conn = await aioodbc.connect(dsn=dsn)
    
        cur = await conn.cursor()
        await cur.execute("SELECT 42 AS age;")
        rows = await cur.fetchall()
        print(rows)
        print(rows[0])
        print(rows[0].age)
        await cur.close()
        await conn.close()
    
    
    asyncio.run(test_example())

    Connection Pool

    Connection pooling is ported from aiopg and relies on PEP492 features:

    import asyncio
    
    import aioodbc
    
    
    async def test_pool():
        dsn = "Driver=SQLite3;Database=sqlite.db"
        pool = await aioodbc.create_pool(dsn=dsn)
    
        async with pool.acquire() as conn:
            cur = await conn.cursor()
            await cur.execute("SELECT 42;")
            r = await cur.fetchall()
            print(r)
            await cur.close()
            await conn.close()
        pool.close()
        await pool.wait_closed()
    
    
    asyncio.run(test_pool())

    Context Managers

    Pool, Connection and Cursor objects support the context management protocol:

    import asyncio
    
    import aioodbc
    
    
    async def test_example():
        dsn = "Driver=SQLite;Database=sqlite.db"
    
        async with aioodbc.create_pool(dsn=dsn) as pool:
            async with pool.acquire() as conn:
                async with conn.cursor() as cur:
                    await cur.execute("SELECT 42 AS age;")
                    val = await cur.fetchone()
                    print(val)
                    print(val.age)
    
    
    asyncio.run(test_example())

    Installation

    In a Linux environment, pyodbc (and hence aioodbc) requires the unixODBC library. You can install it using your package manager, for example:

    $ sudo apt-get install unixodbc
    $ sudo apt-get install unixodbc-dev
    

    Then:

    pip install aioodbc
    

    Run tests

    To run tests locally without docker, install unixodbc and sqlite driver:

    $ sudo apt-get install unixodbc
    $ sudo apt-get install libsqliteodbc
    

    Create a virtualenv and install the package with its requirements:

    $ pip install -r requirements-dev.txt
    

    Run tests, lints etc:

    $ make fmt
    $ make lint
    $ make test
    

    Other SQL Drivers

    • aiopg – asyncio client for PostgreSQL
    • aiomysql – asyncio client for MySQL

    Requirements

  • FB-BEV

    Forward-Backward View Transformation for Vision-Centric AV Perception

    FB-BEV and FB-OCC are a family of vision-centric 3D object detection and occupancy prediction methods based on forward-backward view transformation.

    News

    • [2023/8/01] FB-BEV was accepted to ICCV 2023.
    • 🏆 [2023/6/16] FB-OCC wins both Outstanding Champion and Innovation Award in Autonomous Driving Challenge in conjunction with CVPR 2023 End-to-End Autonomous Driving Workshop and Vision-Centric Autonomous Driving Workshop.

    Getting Started

    Model Zoo

    Backbone Method Lr Schd IoU Config Download
    R50 FB-OCC 20ep 39.1 config model

    • More model weights will be released later.

    License

    Copyright © 2022 – 2023, NVIDIA Corporation. All rights reserved.

    This work is made available under the Nvidia Source Code License-NC. Click here to view a copy of this license.

    The pre-trained models are shared under CC-BY-NC-SA-4.0. If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.

    For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing.

    Citation

    If this work is helpful for your research, please consider citing:

    @inproceedings{li2023fbbev,
      title={{FB-BEV}: {BEV} Representation from Forward-Backward View Transformations},
      author={Li, Zhiqi and Yu, Zhiding and Wang, Wenhai and Anandkumar, Anima and Lu, Tong and Alvarez, Jose M},
      booktitle={IEEE/CVF International Conference on Computer Vision (ICCV)},
      year={2023}
    }
    

    @article{li2023fbocc,
      title={{FB-OCC}: {3D} Occupancy Prediction based on Forward-Backward View Transformation},
      author={Li, Zhiqi and Yu, Zhiding and Austin, David and Fang, Mingsheng and Lan, Shiyi and Kautz, Jan and Alvarez, Jose M},
      journal={arXiv:2307.01492},
      year={2023}
    }
    

    Acknowledgement

    Many thanks to these excellent open source projects:
