[DATALAD RUNCMD] run codespell throughout fixing typo automagically
It would find/fix some typos which are new to codespell 2.2.6, and also some in
.dotfiles which were not tested before.

=== Do not change lines below ===
{
 "chain": [],
 "cmd": "codespell -w",
 "exit": 0,
 "extra_inputs": [],
 "inputs": [],
 "outputs": [],
 "pwd": "."
}
^^^ Do not change lines above ^^^
yarikoptic committed Nov 3, 2023
1 parent 26c6ba7 commit a175662
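
This commit was produced with DataLad's run machinery: the command is executed, its results are saved, and the machine-readable record above is embedded in the commit message so the change can be reproduced. A minimal sketch of how such a commit can be created and later replayed (assuming a DataLad dataset and codespell >= 2.2.6 are available):

    # run codespell across the dataset, writing fixes in place,
    # and record the invocation in the commit message
    datalad run -m "run codespell throughout fixing typo automagically" "codespell -w"

    # a collaborator with a clone can re-execute the recorded command
    datalad rerun a175662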
Showing 16 changed files with 20 additions and 20 deletions.
2 changes: 1 addition & 1 deletion .appveyor.yml
@@ -47,7 +47,7 @@
# - Workers have vim installed for convenient text editing in the command shell


-# do not make repository clone cheap: interfers with versioneer
+# do not make repository clone cheap: interferes with versioneer
shallow_clone: false


2 changes: 1 addition & 1 deletion .travis.yml
@@ -226,7 +226,7 @@ before_install:
- travis_retry sudo apt-get install eatmydata # to speedup some installations
- tools/ci/prep-travis-forssh.sh
- tools/ci/debians_disable_outdated_ssl_cert
-# Install various basic depedencies
+# Install various basic dependencies
- travis_retry sudo eatmydata apt-get install zip pandoc p7zip-full
# needed for tests of patool compression fall-back solution
- travis_retry sudo eatmydata apt-get install xz-utils
2 changes: 1 addition & 1 deletion datalad/core/distributed/tests/test_clone.py
@@ -1727,7 +1727,7 @@ def test_url_mapping_specs():
(_windows_map,
r'C:\Users\datalad\from',
r'D:\to'),
-# test standard github mapping, no pathc needed
+# test standard github mapping, no patch needed
({},
'https://github.com/datalad/testrepo_gh/sub _1',
'https://github.com/datalad/testrepo_gh-sub__1'),
2 changes: 1 addition & 1 deletion datalad/core/local/create.py
@@ -367,7 +367,7 @@ def __call__(

# Note for the code below:
# OPT: be "smart" and avoid re-resolving .repo -- expensive in DataLad
-# Re-use tbrepo instance, do not use tbds.repo
+# Reuse tbrepo instance, do not use tbds.repo

# create and configure desired repository
# also provides initial set of content to be tracked with git (not annex)
4 changes: 2 additions & 2 deletions datalad/core/local/run.py
@@ -665,7 +665,7 @@ def _format_iospecs(specs, **kwargs):
a kwargs-key (minus the brace chars), whose value is a list.
In this case the entire specification list is substituted for
the list in kwargs, which is returned as such. This enables
-the replace/re-use sequences, e.g. --inputs '{outputs}'
+the replace/reuse sequences, e.g. --inputs '{outputs}'
Parameters
----------
@@ -844,7 +844,7 @@ def run_command(cmd, dataset=None, inputs=None, outputs=None, expand=None,
can be used by callers that already performed analog verififcations
to avoid duplicate processing.
yield_expanded : {'inputs', 'outputs', 'both'}, optional
-Include a 'expanded_%s' item into the run result with the exanded list
+Include a 'expanded_%s' item into the run result with the expanded list
of paths matching the inputs and/or outputs specification,
respectively.
4 changes: 2 additions & 2 deletions datalad/distributed/ora_remote.py
@@ -290,7 +290,7 @@ def get_7z(self):

runner = WitlessRunner()
# TODO: To not rely on availability in PATH we might want to use `which`
-# (`where` on windows) and get the actual path to 7z to re-use in
+# (`where` on windows) and get the actual path to 7z to reuse in
# in_archive() and get().
# Note: `command -v XXX` or `type` might be cross-platform
# solution!
@@ -681,7 +681,7 @@ def write_file(self, file_path, content, mode='w'):

def get_7z(self):
# TODO: To not rely on availability in PATH we might want to use `which`
-# (`where` on windows) and get the actual path to 7z to re-use in
+# (`where` on windows) and get the actual path to 7z to reuse in
# in_archive() and get().
# Note: `command -v XXX` or `type` might be cross-platform
# solution!
4 changes: 2 additions & 2 deletions datalad/distribution/siblings.py
@@ -515,7 +515,7 @@ def _configure_remote(
annex_required = _inherit_annex_var(
delayed_super, name, 'required')
if annex_group is None:
-# I think it might be worth inheritting group regardless what
+# I think it might be worth inheriting group regardless what
# value is
#if annex_wanted in {'groupwanted', 'standard'}:
annex_group = _inherit_annex_var(
@@ -577,7 +577,7 @@ def _configure_remote(

if as_common_datasrc:
# we need a fully configured remote here
-# do not re-use `url`, but ask for the remote config
+# do not reuse `url`, but ask for the remote config
# that git-annex will use too
remote_url = repo.config.get(f'remote.{name}.url')
ri = RI(remote_url)
2 changes: 1 addition & 1 deletion datalad/interface/utils.py
@@ -158,7 +158,7 @@ def discover_dataset_trace_to_targets(basepath, targetpaths, current_trace,
set() if includeds is None else set(includeds)
# this beast walks the directory tree from a given `basepath` until
# it discovers any of the given `targetpaths`
-# if it finds one, it commits any accummulated trace of visited
+# if it finds one, it commits any accumulated trace of visited
# datasets on this edge to the spec
valid_repo = GitRepo.is_valid_repo(basepath)
if valid_repo:
2 changes: 1 addition & 1 deletion datalad/local/configuration.py
@@ -171,7 +171,7 @@ def __call__(
raise ValueError(
'Scope selection is not supported for dumping')

-# normalize variable specificatons
+# normalize variable specifications
specs = []
for s in ensure_list(spec):
if isinstance(s, tuple):
2 changes: 1 addition & 1 deletion datalad/support/entrypoints.py
@@ -32,7 +32,7 @@ def iter_entrypoints(group, load=False):
Yields
-------
-(name, module, loade(r|d))
+(name, module, load(r|d))
The first item in each yielded tuple is the entry point name (str).
The second is the name of the module that contains the entry point
(str). The type of the third items depends on the load parameter.
2 changes: 1 addition & 1 deletion datalad/support/gitrepo.py
@@ -1470,7 +1470,7 @@ def commit(self, msg: Optional[str] = None,
if '--amend' in options:
if '--no-edit' not in options:
# don't overwrite old commit message with our default
-# message by default, but re-use old one. In other words:
+# message by default, but reuse old one. In other words:
# Make --no-edit the default:
options += ["--no-edit"]
else:
2 changes: 1 addition & 1 deletion datalad/support/parallel.py
@@ -91,7 +91,7 @@ class ProducerConsumer:
- `producer` must produce unique entries. AssertionError might be raised if
the same entry is to be consumed.
- `consumer` can add to the queue of items produced by producer via
-`.add_to_producer_queue`. This allows for continuous re-use of the same
+`.add_to_producer_queue`. This allows for continuous reuse of the same
instance in recursive operations (see `get` use of ProducerConsumer).
- if producer or consumer raise an exception, we will try to "fail gracefully",
unless subsequent Ctrl-C is pressed, we will let already running jobs to
4 changes: 2 additions & 2 deletions datalad/tests/test_constraints.py
@@ -49,14 +49,14 @@ def test_bool():
# this should always work
assert_equal(c(True), True)
assert_equal(c(False), False)
-# all that resuls in True
+# all that results in True
assert_equal(c('True'), True)
assert_equal(c('true'), True)
assert_equal(c('1'), True)
assert_equal(c('yes'), True)
assert_equal(c('on'), True)
assert_equal(c('enable'), True)
-# all that resuls in False
+# all that results in False
assert_equal(c('false'), False)
assert_equal(c('False'), False)
assert_equal(c('0'), False)
2 changes: 1 addition & 1 deletion docs/source/design/progress_reporting.rst
@@ -98,7 +98,7 @@ previously known value (if `increment=True`):
log_progress(
lgr.info,
-# must match the identier used to start the progress reporting
+# must match the identifier used to start the progress reporting
identifier,
# arbitrary message content, string expansion supported just like
# regular log messages
2 changes: 1 addition & 1 deletion docs/source/design/threaded_runner.rst
@@ -50,7 +50,7 @@ Main Thread

There is a single queue, the ``output_queue``, on which the main thread waits, after all transport threads, and the process waiting thread are started. The ``output_queue`` is the signaling queue and the output queue of the stderr-thread and the stdout-thread. It is also the signaling queue of the stdin-thread, and it is the signaling queue for the process waiting threads.

-The main thread waits on the ``output_queue`` for data or signals and handles them accordingly, i.e. calls data callbacks of the protocol if data arrives, and calls connection-related callbacks of the protocol if other signals arrive. If no messages arrive on the ``output_queue``, the main thread blocks for 100ms. If it is unblocked, either by getting a message or due to elapsing of the 100ms, it will process timeouts. If the ``timeout``-parameter to the constructor was not ``None``, it will check the last time any of the monitored files (stdout and/or stderr) yielded data. If the time is larger than the specified timeout, it will call the ``tiemout`` method of the protocol instance. Due to this implementation, the resolution for timeouts is 100ms. The main thread handles the closing of ``stdin``-, ``stdout``-, and ``stderr``-file descriptors if all other threads have terminated and if ``output_queue`` is empty. These tasks are either performed in the method ``ThreadedRunner.run()`` or in a result generator that is returned by ``ThreadedRunner.run()`` whenever ``send()`` is called on it.
+The main thread waits on the ``output_queue`` for data or signals and handles them accordingly, i.e. calls data callbacks of the protocol if data arrives, and calls connection-related callbacks of the protocol if other signals arrive. If no messages arrive on the ``output_queue``, the main thread blocks for 100ms. If it is unblocked, either by getting a message or due to elapsing of the 100ms, it will process timeouts. If the ``timeout``-parameter to the constructor was not ``None``, it will check the last time any of the monitored files (stdout and/or stderr) yielded data. If the time is larger than the specified timeout, it will call the ``timeout`` method of the protocol instance. Due to this implementation, the resolution for timeouts is 100ms. The main thread handles the closing of ``stdin``-, ``stdout``-, and ``stderr``-file descriptors if all other threads have terminated and if ``output_queue`` is empty. These tasks are either performed in the method ``ThreadedRunner.run()`` or in a result generator that is returned by ``ThreadedRunner.run()`` whenever ``send()`` is called on it.


Protocols
2 changes: 1 addition & 1 deletion docs/source/publications.rst
@@ -21,7 +21,7 @@ YODA: YODA's Organigram on Data Analysis [poster]
- F1000Research 2018, 7:1965 (https://doi.org/10.7490/f1000research.1116363.1)

Go FAIR with DataLad [talk]
-- On DataLad's capabilities to create and maintain Findable, Accessible, Interoperable, and Re-Usable (FAIR)
+- On DataLad's capabilities to create and maintain Findable, Accessible, Interoperable, and reusable (FAIR)
resources.
- Michael Hanke, Yaroslav O. Halchenko
- Bernstein Conference 2018 workshop: Practical approaches to research data management and reproducibility
