This is a basic example that shows how to query the data for the GOA or AI:
library(swo)
yrs <- 2017 # minimum year to consider
species <- c(10110, 21740) # AFSC species codes
region <- 'GOA' # survey region: 'GOA' or 'AI'
afsc_user <- 'your_afsc_username'
afsc_pwd <- 'your_afsc_pwd'
# query the database, results are stored in 4 files in the "data" folder
query_data(region, species, yrs, afsc_user, afsc_pwd)
# bring data into the global environment
cpue <- vroom::vroom(here::here('data', 'cpue_goa.csv'))
lfreq <- vroom::vroom(here::here('data', 'lfreq_goa.csv'))
strata <- vroom::vroom(here::here('data', 'strata_goa.csv'))
specimen <- vroom::vroom(here::here('data', 'specimen_goa.csv'))
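After the query completes, a quick sanity check on what was read in can be useful. This is a minimal sketch that assumes only that the four files loaded as data frames; dplyr::glimpse() is used purely for inspection, since the exact columns depend on what query_data() writes.

# confirm the four files were read in and peek at one of them
sapply(list(cpue = cpue, lfreq = lfreq, strata = strata, specimen = specimen), nrow)
dplyr::glimpse(cpue)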
Since the query will not work unless one is connected to the AFSC (and has the associated permissions), some example data have been provided.
library(swo)
# get data
data(example_cpue) # numerical CPUE by species/year/haul/strata
data(example_lfreq) # length frequency (lengthed fish) sample data
data(example_specimen) # aged fish sample data
data(example_strata) # survey strata area sizes (km^2)
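Before running a simulation, a quick peek at these objects shows the inputs that swo_sim() works from. A minimal sketch; head() is used so that no particular column names need to be assumed.

# peek at the bundled example inputs
head(example_cpue)
head(example_strata)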
The following is a simple example that runs two replicates, incorporating bootstrap haul, length, and age variability and reducing the length sample size to 100 per haul. The results are saved in the nested folder output/goa with file names example_comp_age.csv, etc.
swo_sim(iters = 2, example_lfreq, example_specimen, example_cpue, example_strata,
        boot_hauls = TRUE, boot_lengths = TRUE, boot_ages = TRUE,
        length_samples = 100, save = 'example', region = 'goa')
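Once the run finishes, the saved results can be read back in from the nested output folder. A minimal sketch using the output/goa location and example_comp_age.csv file name noted above; the other output files follow the same naming pattern.

# read the simulated age composition results back into the session
age_comp <- vroom::vroom(here::here('output', 'goa', 'example_comp_age.csv'))
head(age_comp)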