text | labels
---|---
Title: macOS executable
Body: <img width="247" alt="image" src="https://github.com/JoeanAmier/XHS-Downloader/assets/130121308/f460e5a1-e9ef-4aed-9c13-c44c1300bc27">

Works fine on Intel machines; M-series users, please give it a try. The first launch is a bit slow, so please wait a moment.
[XHS-Downloader_V1.7_MAC.zip](https://github.com/JoeanAmier/XHS-Downloader/files/13838186/XHS-Downloader_V1.7_MAC.zip)
| 0easy
|
Title: [INF] Allow `import_message()` to be Python distribution flexible
Body: # Brief Description
<!-- Please provide a brief description of what you'd like to propose. -->
Currently, if a user attempts to use a feature of an optional external package (`rdkit`, `biopython`, `unyt`, `pyspark`) that is not installed, the user receives an error that directs them on how to install it. The error message is produced by `import_message()`, which passes along the installation instructions. Ex:
```
To use the janitor submodule spark, you need to install pyspark.
To do so, use the following command:
conda install -c conda-forge pyspark
```
With the exception of `rdkit`, I think all of these packages are `pip` installable. It would be nice if this message could decide whether to provide `conda` vs `pip` instructions to the user. Or tell them that the package can only be installed with `conda`.
This is how the function is currently called:
```python
import_message(submodule="spark", package="pyspark",
installation="conda install -c conda-forge pyspark")
```
Not all `conda` installs will use the same channel. One option is to provide both `conda` and `pip` instructions as arguments in the call, and let the function figure out which to send to the user. If either is `None`, then the package is understood to be `pip`- or `conda`-only.
# Example API
One verbose option would be to extend what currently exists:
```python
import_message(submodule="spark", package="pyspark",
conda_installation="conda install -c conda-forge pyspark",
pip_installation="pip install pyspark")
```
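A hedged sketch of how this verbose variant might behave; the function name matches the proposal, but the body and message wording are assumptions, not the current janitor implementation:

```python
def import_message(submodule, package, conda_installation=None, pip_installation=None):
    """Build the install hint from whichever commands are provided.

    Passing None for either argument marks the package as unavailable
    on that channel (illustrative sketch, not the current janitor API).
    """
    commands = [c for c in (conda_installation, pip_installation) if c is not None]
    if not commands:
        raise ValueError("provide at least one installation command")
    lines = [
        f"To use the janitor submodule {submodule}, you need to install {package}.",
        "To do so, use one of the following commands:",
        *commands,
    ]
    return "\n".join(lines)
```

A `pip`-only package would then simply pass `conda_installation=None`, and the conda line drops out of the message.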
A more succinct version could be:
```python
import_message(submodule="spark", package="pyspark",
conda_channel="conda-forge", pip_install=True)
```
which would use the provided `package` argument, and `conda_channel` could be `None` if it doesn't exist on `conda`. | 0easy
|
Title: Add SHAP explanations to Neural Network
Body: | 0easy
|
Title: RFC: General mixture distribution
Body: A quick search for mixture distributions in numpyro only turns up examples using `Categorical` in conjunction with an array of distributions. Since sampling from discrete distributions is not always desirable, I have implemented a quick general purpose mixture distribution with continuous log probability.
```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.random import PRNGKey
from jax.scipy.special import logsumexp

import numpyro.distributions as dist
from numpyro.distributions import Distribution


class Mixture(Distribution):
    arg_constraints = {}

    def __init__(self, distributions, weights, validate_args=None):
        self.distributions = distributions
        self.weights = weights
        self.log_weights = jnp.log(weights)
        self.choices = dist.Categorical(probs=weights)
        # ensure all child distributions have the same event and batch shape
        assert len(set(d.batch_shape for d in distributions)) == 1
        assert len(set(d.event_shape for d in distributions)) == 1
        super().__init__(
            distributions[0].batch_shape,
            distributions[0].event_shape,
            validate_args,
        )

    @property
    def support(self):
        return self.distributions[0].support

    def sample(self, key: PRNGKey, sample_shape=()):
        # use independent keys: one for the categorical draw,
        # one per component sample
        choice_key, sample_key = jax.random.split(key)
        choice_keys = jax.random.split(sample_key, np.prod(sample_shape))
        choices = self.choices.sample(choice_key, (np.prod(sample_shape),))
        samples = jnp.array([
            self.distributions[n].sample(k) for n, k in zip(choices, choice_keys)
        ])
        return samples.reshape(sample_shape + self.batch_shape + self.event_shape)

    def log_prob(self, value):
        log_probs = jnp.array([d.log_prob(value) for d in self.distributions])
        # have to flip axes here to make arrays broadcast correctly
        return logsumexp(log_probs.T + self.log_weights, axis=-1).T
```
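The `log_prob` above is a standard log-sum-exp over the mixture components; as a small NumPy-only sanity check of that identity (independent of numpyro, with illustrative helper names):

```python
import numpy as np

def normal_logpdf(x, mu, sigma):
    # log density of a univariate normal
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def mixture_log_prob(x, mus, sigmas, weights):
    # log p(x) = logsumexp_k( log w_k + log p_k(x) ), computed stably
    log_terms = np.log(weights) + np.array(
        [normal_logpdf(x, m, s) for m, s in zip(mus, sigmas)]
    )
    m = log_terms.max()
    return m + np.log(np.exp(log_terms - m).sum())
```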
I'd be curious whether this distribution would be of use to other numpyro users/maintainers, in which case I'd be very happy to draft a PR. Any pointers to potential improvements or issues would also be appreciated in the meantime. | 0easy
|
Title: Add support for more operators
Body: See full list at https://sqlite.org/vtab.html#xbestindex:
- [x] `IS NULL`
- [x] `IS NOT NULL`
- [x] `!=`
- [x] `LIKE` | 0easy
|
Title: Feature: Search substring in env variable completer
Body: ```xsh
$TRA<Tab>
# Show all variables with `*TRA*`
```
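A minimal sketch of the requested matching logic; the function name and signature are hypothetical, not the actual xonsh completer API:

```python
def complete_env(prefix, env):
    """Return env variable names containing the typed text anywhere,
    not only as a prefix (case-insensitive)."""
    needle = prefix.lstrip("$").upper()
    return sorted(name for name in env if needle in name.upper())
```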
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Add a logger and verbosity options
Body: So after https://github.com/python-security/pyt/pull/99 is merged, we will be pretty accurate, accurate enough for us to shift our focus towards making the tool more resilient and able to run on lots and lots of repos.
PyT needs to be more user-friendly by letting the user know e.g. which files it is looking at, and also more resilient, logging will help with fixing issues like RecursionDepth errors etc.
Here is a pretty good logging PR we can take things from: https://github.com/Yelp/detect-secrets/pull/46
|
Title: Feature request: JSON viewer/editor element
Body: ### Description
Given a valid JSON, the JSON viewer/editor element renders it nicely (highlighting keys, breaking long lines, copy raw functionality, etc).
### Suggested solution
None.
### Alternative
_No response_
### Additional context
_No response_ | 0easy
|
Title: Clarify that the reported allocations are at the high water mark
Body: This should probably be documented both in the documentation and in the report that gets printed out. | 0easy
|
Title: [Bug] Testing new Llama-3_3-Nemotron-Super-49B-v1 by Nvidia: "Model architectures ['DeciLMForCausalLM'] are not supported for now."
Body: ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
I tried to run Llama-3_3-Nemotron-Super-49B-v1, recently announced by Nvidia, on SGLang.
It does not seem to be supported yet, since `DeciLMForCausalLM` is not an accepted architecture. See below.
Can you add the corresponding support?
```
Scheduler hit an exception: Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/sglang/srt/managers/scheduler.py", line 1748, in run_scheduler_process
scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sglang/srt/managers/scheduler.py", line 218, in __init__
self.tp_worker = TpWorkerClass(
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sglang/srt/managers/tp_worker_overlap_thread.py", line 63, in __init__
self.worker = TpModelWorker(server_args, gpu_id, tp_rank, dp_rank, nccl_port)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sglang/srt/managers/tp_worker.py", line 74, in __init__
self.model_runner = ModelRunner(
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sglang/srt/model_executor/model_runner.py", line 166, in __init__
self.initialize(min_per_gpu_memory)
File "/usr/local/lib/python3.12/site-packages/sglang/srt/model_executor/model_runner.py", line 176, in initialize
self.load_model()
File "/usr/local/lib/python3.12/site-packages/sglang/srt/model_executor/model_runner.py", line 361, in load_model
self.model = get_model(
^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sglang/srt/model_loader/__init__.py", line 22, in get_model
return loader.load_model(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sglang/srt/model_loader/loader.py", line 358, in load_model
model = _initialize_model(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sglang/srt/model_loader/loader.py", line 137, in _initialize_model
model_class, _ = get_model_architecture(model_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sglang/srt/model_loader/utils.py", line 37, in get_model_architecture
return ModelRegistry.resolve_model_cls(architectures)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sglang/srt/models/registry.py", line 65, in resolve_model_cls
return self._raise_for_unsupported(architectures)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/sglang/srt/models/registry.py", line 32, in _raise_for_unsupported
raise ValueError(
ValueError: Model architectures ['DeciLMForCausalLM'] are not supported for now. Supported architectures: dict_keys(['BaichuanForCausalLM', 'ChatGLMModel', 'CohereForCausalLM', 'Cohere2ForCausalLM', 'DbrxForCausalLM', 'DeepseekForCausalLM', 'MultiModalityCausalLM', 'DeepseekV3ForCausalLMNextN', 'DeepseekV2ForCausalLM', 'DeepseekV3ForCausalLM', 'ExaoneForCausalLM', 'GemmaForCausalLM', 'Gemma2ForCausalLM', 'Gemma2ForSequenceClassification', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GraniteForCausalLM', 'Grok1ForCausalLM', 'Grok1ModelForCausalLM', 'InternLM2ForCausalLM', 'InternLM2ForRewardModel', 'LlamaForCausalLM', 'Phi3ForCausalLM', 'InternLM3ForCausalLM', 'LlamaForClassification', 'LlamaForCausalLMEagle', 'LlamaEmbeddingModel', 'MistralModel', 'LlamaForSequenceClassification', 'LlamaForSequenceClassificationWithNormal_Weights', 'LlavaLlamaForCausalLM', 'LlavaQwenForCausalLM', 'LlavaMistralForCausalLM', 'LlavaVidForCausalLM', 'MiniCPMForCausalLM', 'MiniCPM3ForCausalLM', 'MiniCPMV', 'MistralForCausalLM', 'MixtralForCausalLM', 'QuantMixtralForCausalLM', 'MllamaForConditionalGeneration', 'OlmoForCausalLM', 'Olmo2ForCausalLM', 'OlmoeForCausalLM', 'Phi3SmallForCausalLM', 'QWenLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2_5_VLForConditionalGeneration', 'Qwen2ForCausalLMEagle', 'Qwen2MoeForCausalLM', 'Qwen2ForRewardModel', 'Qwen2VLForConditionalGeneration', 'StableLmForCausalLM', 'TorchNativeLlamaForCausalLM', 'TorchNativePhi3ForCausalLM', 'XverseForCausalLM', 'XverseMoeForCausalLM', 'YiVLForCausalLM'])
```
### Reproduction
Start SGLang with `nvidia/Llama-3_3-Nemotron-Super-49B-v1` from Hugging Face.
The message above will appear right after this command.
### Environment
Amazon Linux 2023
SGLang 0.0.4.post1, the last officially published version as of this writing | 0easy
|
Title: Marketplace - Update the fonts on the list so that they follow the fonts on the mocks
Body:
### Describe your issue.
Can we update the fonts on the agent cards so that they follow the fonts on the mocks? The names of the font styles are in yellow.

I'm linking the typography sheet here: https://www.figma.com/design/aw299myQfhiXPa4nWkXXOT/agpt-template?node-id=7-47&t=axoLiZIIUXifeRWU-1
| 0easy
|
Title: confusing error when passing something.html as source
Body: ```yaml
- source: fit.html
product:
nb: output/nb.ipynb
model: output/model.pickle
```
```
(ploomber) Edu@MBP Desktop/mlx » ploomber task fit0 -f
Loading pipeline...
Error: Failed to determine task class for source 'fit.html': list indices must be integers or slices, not str.
```
The issue is that `fit.html` is interpreted as a dotted path, so Ploomber runs `import fit` and then looks for a function named `html`. However, `fit.py` is a script, so it contains this:
```python
# %% tags=["parameters"]
upstream = ['join']
# %%
df = pd.read_parquet(str(upstream['join'])) # breaks here since upstream is a list
```
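One way to produce a clearer message would be to classify the source before attempting the dotted-path import; a hedged sketch with illustrative names, not Ploomber's actual code:

```python
from pathlib import Path

SCRIPT_SUFFIXES = {".py", ".ipynb", ".sql", ".R"}
KNOWN_NON_SOURCE = {".html", ".txt", ".md", ".csv"}

def classify_source(source):
    """Classify a task source before trying to import it as a dotted path,
    so 'fit.html' produces a clear error instead of
    'list indices must be integers or slices, not str'."""
    suffix = Path(source).suffix
    if suffix in SCRIPT_SUFFIXES:
        return "script"
    if suffix in KNOWN_NON_SOURCE:
        raise ValueError(
            f"Failed to determine task class for source {source!r}: "
            f"{suffix!r} files are not supported task sources "
            f"(expected one of {sorted(SCRIPT_SUFFIXES)} or a dotted path)"
        )
    return "dotted_path"
```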
| 0easy
|
Title: [Storage] Typos in file_mount bucket paths have cryptic errors
Body: I have a bucket `romilb-notebooks` with a directory `my_dataset`. If I misspell the sub-path in my task file_mounts yaml (`my_datasets` instead of `my_dataset`):
```
file_mounts:
/mydata: gs://romilb-notebooks/my_datasets
```
`sky launch` fails with a cryptic error:
```
Traceback (most recent call last):
File "/Users/romilb/tools/anaconda3/bin/sky", line 8, in <module>
sys.exit(cli())
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 368, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/cli.py", line 812, in invoke
return super().invoke(ctx)
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/romilb/tools/anaconda3/lib/python3.9/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 389, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/cli.py", line 1125, in launch
_launch_with_confirm(task,
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/cli.py", line 603, in _launch_with_confirm
sky.launch(
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 389, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 389, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/execution.py", line 468, in launch
return _execute(
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/execution.py", line 301, in _execute
backend.sync_file_mounts(handle, task.file_mounts,
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 389, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/utils/common_utils.py", line 368, in _record
return f(*args, **kwargs)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/backends/backend.py", line 73, in sync_file_mounts
return self._sync_file_mounts(handle, all_file_mounts, storage_mounts)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/backends/cloud_vm_ray_backend.py", line 3100, in _sync_file_mounts
self._execute_file_mounts(handle, all_file_mounts)
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/backends/cloud_vm_ray_backend.py", line 4526, in _execute_file_mounts
if storage.is_directory(src):
File "/Users/romilb/Romil/Berkeley/Research/sky-experiments/sky/cloud_stores.py", line 125, in is_directory
p = subprocess.run(command,
File "/Users/romilb/tools/anaconda3/lib/python3.9/subprocess.py", line 528, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'pushd /tmp &>/dev/null && { gcloud --help > /dev/null 2>&1 || { mkdir -p ~/.sky/logs && wget --quiet https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-424.0.0-linux-x86_64.tar.gz > ~/.sky/logs/gcloud_installation.log && tar xzf google-cloud-sdk-424.0.0-linux-x86_64.tar.gz >> ~/.sky/logs/gcloud_installation.log && rm -rf ~/google-cloud-sdk >> ~/.sky/logs/gcloud_installation.log && mv google-cloud-sdk ~/ && ~/google-cloud-sdk/install.sh -q >> ~/.sky/logs/gcloud_installation.log 2>&1 && echo "source ~/google-cloud-sdk/path.bash.inc > /dev/null 2>&1" >> ~/.bashrc && source ~/google-cloud-sdk/path.bash.inc >> ~/.sky/logs/gcloud_installation.log 2>&1; }; } && popd &>/dev/null && [[ "$(uname)" == "Darwin" ]] && skypilot_gsutil() { gsutil -m -o "GSUtil:parallel_process_count=1" "$@"; } || skypilot_gsutil() { gsutil -m "$@"; }; GOOGLE_APPLICATION_CREDENTIALS=~/.config/gcloud/application_default_credentials.json skypilot_gsutil ls -d gs://romilb-notebooks/my_datasets' returned non-zero exit status 1.
```
We should have a cleaner error message here. | 0easy
|
Title: upload-zone is hidden
Body: The interactable upload zones cannot actually be interacted with because of their style properties:
```css
display: none !important;
min-height: 0px !important;
height: 0px !important;
```
I am running the container on a separate computer on the local network and accessing it from Firefox on the client computer.
I did not run into this issue when testing everything on one system.
I've provided a short video of the issue:
[upload-zone.webm](https://github.com/sergree/matchering/assets/79172597/fc69309f-1a0d-4f67-9b97-e2f53dc41cc5)
I'm not super familiar with Docker and I'm not sure how to get any debug information. | 0easy
|
Title: `Path` snapshotting support
Body: **Is your feature request related to a problem? Please describe.**
I would love to be able to make path snapshots and being able to write:
```python
def test_something(snapshot):
path: Path = do_something()
assert path == snapshot
```
Would you be open to a pull request bringing this feature? I have one which is working just fine and is covered by tests.
This includes:
- both file and directory support
- support both filtering and matching
- proper diff with syntax highlighting using the same pattern as `pytest` (i.e. `pygments` is an optional dependency, with a safe fallback if it is not present)
Here's a sample syntax highlighted output:

**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
Alternative: publish it separately, like an extension to the extension, something like `syrupy-path`. But I thought directory/file snapshotting is so common that it would make more sense to have this in `syrupy`
**Additional context**
Sorry to present it like that. In fact, I crafted my PR and then when rereading the contribution guide, I noticed [this paragraph](https://github.com/tophat/syrupy/blob/main/CONTRIBUTING.md#suggesting-enhancements) so here I am.
| 0easy
|
Title: [FEA] CLA
Body: We should probably start adopting some sort of CLA flow for PRs, eg, https://github.com/cla-assistant/cla-assistant . Ideally use across all repos.
Probably something along the lines of:
* Abide by code of conduct
* Patent & copyright assignment, current & future
* Any other CYA important for users | 0easy
|
Title: Add support for SKIPPING cells' execution
Body: Hey,
First, great open source project.
I suggest the following feature:
Currently to execute the entire notebook we can do:
~~~python
def test_notebook(self):
    notebook_path = "fuse_examples/tutorials/hello_world/hello_world.ipynb"
    # Execute the whole notebook and save it as an object
    with testbook(notebook_path, execute=True, timeout=600) as tb:
~~~
Now, if we want to execute only the first cell, we shall do:
~~~python
def test_notebook(self):
    notebook_path = "fuse_examples/tutorials/hello_world/hello_world.ipynb"
    # Execute only the first cell and save the notebook as an object
    with testbook(notebook_path, execute=[0], timeout=600) as tb:
~~~
But what if we want to execute the **entire notebook except the first cell?**
Currently it has to be as follow:
~~~python
def test_notebook(self):
    NUM_OF_CELLS = 42  # total number of cells in the notebook
    notebook_path = "fuse_examples/tutorials/hello_world/hello_world.ipynb"
    # Execute every cell except the first one
    with testbook(notebook_path, execute=range(1, NUM_OF_CELLS), timeout=600) as tb:
~~~
But A MUCH CLEANER APPROACH would be:
~~~python
def test_notebook(self):
    notebook_path = "fuse_examples/tutorials/hello_world/hello_world.ipynb"
    # Execute the whole notebook except the first cell
    with testbook(notebook_path, skip=[0], timeout=600) as tb:
~~~
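Under the hood, a `skip` argument could simply be translated into the `execute` list that testbook already understands; a hedged sketch (illustrative helper, not the actual testbook internals):

```python
def skip_to_execute(num_cells, skip):
    """Convert a list of cell indices to skip into the equivalent
    `execute` list of cell indices to run."""
    skipped = set(skip)
    return [i for i in range(num_cells) if i not in skipped]
```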
| 0easy
|
Title: Support `Last-Modified` header for static files
Body: Render the `Last-Modified` header when serving static files. We are already performing `.stat()` on the open file handle, so we should have the required modification date.
When serving, also check the [If-Modified-Since](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-Modified-Since) header, and render `HTTP 304 Not Modified` if appropriate.
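A hedged sketch of the conditional check; the helper name and signature are assumptions, not the project's API:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def not_modified(if_modified_since, mtime):
    """Return True when HTTP 304 should be served: the file's mtime
    (from .stat()) is not newer than the client's If-Modified-Since."""
    if if_modified_since is None:
        return False
    try:
        since = parsedate_to_datetime(if_modified_since)
    except (TypeError, ValueError):
        return False  # malformed header: serve the file normally
    if since.tzinfo is None:
        since = since.replace(tzinfo=timezone.utc)
    # HTTP dates have one-second resolution, so truncate the mtime
    last_modified = datetime.fromtimestamp(int(mtime), tz=timezone.utc)
    return last_modified <= since
```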
This issue is very similar to #2243, but probably slightly easier, so maybe this one could be taken first. | 0easy
|
Title: Schema autogeneration from SQL complains about missing primary keys in tables one at a time
Body: Related to #649. Assume multiple tables in a database are missing primary keys, and the user is attempting to autogenerate a schema from that database.
The `MissingPrimaryKeyError` that will get raised will contain the information of only a single table that is missing a primary key. That means that the user will end up re-running the autogeneration process N times to discover N tables with missing primary keys, which is incredibly inconvenient.
The error message should contain all the problematic tables with missing primary keys, so the user can avoid all of them in one go. | 0easy
|
Title: Improve /transcribe
Body: - Fix issues with youtube downloading
- Add support for any audio/video file from the web (We download and store locally)
- Add translation support
- Add support for transcribing a voice channel
- Possibly a context-menu transcribe? E.g. if someone sends a YouTube video, or an actual video or audio file, in the chat | 0easy
|
Title: Add optional timestamp to SystemPromptPart
Body: UserPromptPart, ToolReturnPart, RetryPromptPart all include a timestamp attribute.
As do TextPart, ToolCallPart.
SystemPromptPart is the only part type in the ModelRequestPart that does not have a timestamp.
There are times when we create a system prompt instance from a template, and this instantiation happens at a particular point in time. There are also times when this happens in other systems, and that history is passed to our pydantic-ai service; the timestamp is considered a required attribute as part of retaining an immutable, timestamped sequence of entries related to a user/AI chat.
Can we please add
```
timestamp: datetime = field(default_factory=_now_utc)
```
to SystemPromptPart to bring it in line with the other parts | 0easy
|
Title: Marketplace - Can we change title to the Poppins font?
Body: ### Describe your issue.
Can we change this to the Poppins font?
Font: poppins
weight: semibold
size: 48px
line-height: 54px

| 0easy
|
Title: Store snapshot diff without storing previous snapshot
Body: **Is your feature request related to a problem? Please describe.**
The current API requires you to store a previous snapshot before the next "diff" can be stored.
For example
```python
assert result == snapshot
assert result_after_something_happens == snapshot(diff=0)
```
If my test just depends on the diff snapshot (checking that the changes always remain the same) it's not possible to only snapshot the diff.
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like**
I'd be great if I could do something like this:
```python
assert result_after_something_happens == snapshot(diff=result)
```
| 0easy
|
Title: Committers metric API
Body: The canonical definition is here: https://chaoss.community/?p=3945 | 0easy
|
Title: Provide option to render more than one spinner at the same time
Body: Suppose you get multiple long tasks at hand.
It should be nice to be able to start more than one spinner before finishing the last one
take this as an example:
```
from halo import Halo

a = start_task_a()
a_spinner = Halo(text='Do A', spinner='dots')
a_spinner.start()
a_is_spinning = True

b = start_task_b()
b_spinner = Halo(text='Do B', spinner='dots')
b_spinner.start()
b_is_spinning = True

while a_is_spinning or b_is_spinning:
    if a.is_finished():
        a_spinner.succeed()
        a_is_spinning = False
    if b.is_finished():
        b_spinner.succeed()
        b_is_spinning = False
```
| 0easy
|
Title: "Hello World" for DBT Analytical Transformation Workflows
Body: Querying Augur requires a certain amount of learning time, and although a number of starter queries are available in https://github.com/chaoss/augur-community-reports and in blog posts, something like "DBT" could make this more systematic.
Overview: https://github.com/chipzx92/DataEngineeringI320/blob/main/docs/presentations/dbt_intro/index.md
Documentation Introduction: https://docs.getdbt.com/docs/introduction
This issue is ONLY about creating a "Hello World" with these tools.
|
Title: Improve pipeline plot
Body: The output plot is pretty boring:

We're using pygraphviz to generate it and it has lots of customization options to make it better.
Here's the code that generates the plot:
https://github.com/ploomber/ploomber/blob/068df147c2c9af0f998beb6128e13fffe16e86e3/src/ploomber/dag/dag.py#L787
https://github.com/ploomber/ploomber/blob/f6013f9b21ae644976c1696b1d989968d1042602/src/ploomber/dag/dag.py#L820
The fix involves checking the options we have in https://graphviz.org/ to make the plot prettier
To get started, you can clone one example and generate one plot: https://docs.ploomber.io/en/latest/user-guide/templates.html
One option we should have is to show only task names (hide products):
- [x] option to hide products
- [x] document option to hide products in the .rst docs
- [x] improve the plot style (colors, font size, etc)
| 0easy
|
Title: Marketplace - creator page - missing header
Body:
### Describe your issue.
We're missing a small header here that should say "About". It's the same font and styling as the "other links" text.
<img width="1445" alt="Screenshot 2024-12-16 at 17 22 44" src="https://github.com/user-attachments/assets/23d76e44-8a75-4eef-a3de-7bdc035ec111" />
It should look something like this:
<img width="1407" alt="Screenshot 2024-12-16 at 17 24 51" src="https://github.com/user-attachments/assets/618d0c11-991b-453c-93d9-006816ca0f04" />
Please check this Figma file for the specs:
https://www.figma.com/design/Ll8EOTAVIlNlbfOCqa1fG9/Agent-Store-V2?node-id=2121-1818&t=95EuovUwxruyYA6E-1
| 0easy
|
Title: Update GraphiQL
Body: as per
https://github.com/graphql-python/graphene-django/releases/tag/v2.12.0
https://github.com/graphql-python/graphene-django/releases/tag/v2.12.1
https://github.com/graphql-python/graphene-django/releases/tag/v2.13.0
This should also simplify the subscriptions logic a bit | 0easy
|
Title: Empty string is not allowed as ESCAPE for LIKE/ILIKE
Body: ### Describe the bug
PostgreSQL allows specifying an empty string as the ESCAPE clause for the LIKE/ILIKE operator to turn off escaping.
https://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-LIKE
However, SQLAlchemy treats the empty string passed as an argument to like/ilike functions as `None`.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
https://docs.sqlalchemy.org/en/20/core/sqlelement.html#sqlalchemy.sql.expression.ColumnElement.ilike
### SQLAlchemy Version in Use
latest
### DBAPI (i.e. the database driver)
psycopg2
### Database Vendor and Major Version
PostgreSQL
### Python Version
3.8
### Operating system
any
### To Reproduce
```python
self.assert_compile(
sql.column("foo").like("bar", escape=""),
"foo LIKE %(foo_1)s ESCAPE ''",
dialect=postgresql.dialect(),
)
```
### Error
There is no error, the output SQL is missing the `ESCAPE ''` clause.
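The root cause is a truthiness test where an identity test is needed; a hedged sketch of the distinction with an illustrative helper, not SQLAlchemy's actual compiler code:

```python
def render_escape(escape):
    """Render the ESCAPE clause; the check must be `escape is not None`
    rather than truthiness, so '' still renders ESCAPE '' (escaping off)."""
    if escape is not None:
        return " ESCAPE '%s'" % escape.replace("'", "''")
    return ""
```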
### Additional context
https://github.com/sqlalchemy/sqlalchemy/pull/9908 | 0easy
|
Title: Retrieving crypto data from complementary data aggregation services / Coingecko?
Body: Hi,
Currently, cryptofeed is managing data coming from exchanges.
Is there any objection to proposing new data feeds like CoinMarketCap, CoinGecko, Nomics...?
From these data feeds, I would like to get complementary 'global' crypto data like circulating supply, global trading volumes, dominance...
Could related PRs be accepted in cryptofeed?
If yes, what would your recommendations be?
I see 3 main points:
- switching from 'exchange' to 'feed', or creating a new type of feed like 'aggregator' (I come from cryptostore and have not yet looked into the cryptofeed sources; I am only referring to a choice of words :)).
- introducing 'single' asset (not pairs)
- new data types: `circulating_supply`, `trading_volume`, `dominance`...
Thanks for your feedback.
Have a good day,
Best, | 0easy
|
Title: `PlotItem.scene()` has a `QGraphicsScene` type hint instead of a `pyqtgraph.GraphicsScene`
Body: ### Short description
`PlotItem.scene()` has a type hint of `QGraphicsScene | None` instead of a `pyqtgraph.GraphicsScene`. This raises an error when trying to access signals from the scene, such as `sigMouseMoved`.
### Code to reproduce
This is an abridged snippet of code from the `crosshair.py` example:
```python
import pyqtgraph as pg
def mouseMoved(evt):
pass
p = pg.PlotItem()
p.scene().sigMouseMoved.connect(mouseMoved)
# ^^^^^^^^^^^^^ Cannot access attribute "sigMouseMoved" for class "QGraphicsScene"
# Attribute "sigMouseMoved" is unknown
```
### Proposed Solution
I think this can be fixed by adding a type hint to the `GraphicsWidget` class:
```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import pyqtgraph

class GraphicsWidget(GraphicsItem, QtWidgets.QGraphicsWidget):
    # ...
    if TYPE_CHECKING:
        def scene(self) -> "pyqtgraph.GraphicsScene": ...
```
If this seems reasonable, I can draft a PR.
### Tested environment(s)
* PyQtGraph version: 0.13.5 and 0.13.7
* Qt Python binding: PyQt6
* Python version: 3.12.2
* NumPy version: 1.26.4
* Operating system: Windows 10
* Installation method: Poetry
| 0easy
|
Title: Split Transform Primitives out into separate files
Body: - `transform_primitive.py` is becoming very large. I suggest splitting out into separate files.
- We could split this file up by groups (such as LatLong transform primitives in 1 file). | 0easy
|
Title: Tidy up tables used in Pydantic tests
Body: We migrated the Pydantic code from Piccolo API - and now there's quite a mishmash of different tables in there (some are movie based, some are music based, and some are computer based). It might be worth standardising them a bit (for example, using music based tables like the rest of the tests).
https://github.com/piccolo-orm/piccolo/blob/master/tests/utils/test_pydantic.py
Not urgent, but a good first issue. | 0easy
|
Title: Request: Add sameSite parameter to unset_cookie
Body: It would be appreciated if I could use the _unset_cookie_ method to unset a cookie I created with the parameter _sameSite='None'_. Otherwise, I'm forced to use _set_cookie_ for the sole purpose of unsetting that cookie...
With current behaviour, _sameSite_ is set to 'Lax' by default, so the browser ignores the cookie because I'm using different domains for the front end and back end, i.e. cross-site requests.
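For illustration, a deletion `Set-Cookie` value generally has to repeat attributes such as `SameSite` (and `Secure`, which `SameSite=None` requires); a hedged sketch with a hypothetical helper, not Falcon's API:

```python
def unset_cookie_header(name, path="/", same_site="Lax", secure=False):
    """Build a Set-Cookie value that expires the cookie immediately,
    repeating the SameSite attribute it was originally set with."""
    parts = [
        f"{name}=",
        f"Path={path}",
        "Max-Age=0",
        "Expires=Thu, 01 Jan 1970 00:00:00 GMT",
        f"SameSite={same_site}",
    ]
    if secure or same_site == "None":
        parts.append("Secure")  # SameSite=None is only valid with Secure
    return "; ".join(parts)
```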
Thank you | 0easy
|
Title: Per-file language configuration fails if there are two or more spaces after the `Language:` prefix
Body: [Example_localized_files.zip](https://github.com/user-attachments/files/16929400/Example_localized_files.zip)
If we have two spaces after `Language:` the language is not recognized and the test run fails.
Attached is Finnish localized test suites with and without the extra space.
Windows 10, Python 3.10.9, Robot Framework v7.1rc2.
|
Title: Issue with RWI (in dev branch)
Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.**
I'm using the latest development branch from github
After I upgraded python and corresponding packages, I got the following issue:
`TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'`
This was the result of calling RWI, I ensured my Close, High, Low do not contain any non numbers.
I checked the code and I found that inside RWI you have:
```python
atr_ = atr(
high=high, low=low, close=close,
length=length, mamode=mamode, talib=mode_tal
)
denom = atr_ * (length ** 0.5)
```
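For illustration, a hedged pure-Python sketch (no pandas) of guarding such a scaled denominator against missing warm-up values:

```python
import math

def safe_denominator(atr_values, length):
    # Leading warm-up entries may be NaN (or None); propagate NaN
    # instead of raising a TypeError when scaling by sqrt(length).
    scale = length ** 0.5
    return [value * scale if value is not None and not math.isnan(value)
            else float('nan')
            for value in atr_values]

denom = safe_denominator([None, float('nan'), 1.5], length=4)
```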
The issue is that due to the window length the first several values of atr will not be numbers. Perhaps this causes the issue? This could be related to how fillna is deprecated now? | 0easy
|
Title: Tox 4.0.13: .tox/bin/python: No module named tox
Body: ## Issue
Starting a few days ago, all my CI tests are failing due to `.tox/bin/python: No module named tox`, inside the tox-created env.
Full output below; complete results can be found in: https://github.com/sparc4-dev/astropop/actions/runs/3722994922/jobs/6314233756
All tests were passing last week. No change in the [tox.ini](https://github.com/sparc4-dev/astropop/blob/main/tox.ini) was made.
## Environment
Provide at least:
- OS: Ubuntu
- `pip list` of the host Python where `tox` is installed:
```console
Package Version
------------- -------
cachetools 5.2.0
chardet 5.1.0
colorama 0.4.6
distlib 0.3.6
filelock 3.8.2
packaging 22.0
pip 22.3.1
platformdirs 2.6.0
pluggy 1.0.0
pyproject_api 1.2.1
setuptools 65.5.0
tox 4.0.13
```
## Output of running tox
```console
Run tox -v -e py311-test-devdeps-cat
ROOT: will run in automatically provisioned tox, host /opt/hostedtoolcache/Python/3.11.1/x64/bin/python is missing [requires (has)]: tox-pypi-filter>=0.12
ROOT: find interpreter for spec PythonSpec(path=/opt/hostedtoolcache/Python/3.11.1/x64/bin/python)
ROOT: proposed PythonInfo(spec=CPython3.11.1.final.0-64, exe=/opt/hostedtoolcache/Python/3.11.1/x64/bin/python, platform=linux, version='3.11.1 (main, Dec 8 2022, 07:18:22) [GCC 11.3.0]', encoding_fs_io=utf-8-utf-8)
ROOT: will run in a automatically provisioned python environment under /home/runner/work/astropop/astropop/.tox/.tox/bin/python
ROOT: create virtual environment via CPython3Posix(dest=/home/runner/work/astropop/astropop/.tox/.tox, clear=False, no_vcs_ignore=False, global=False)
ROOT: add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/runner/.local/share/virtualenv)
ROOT: add activators for Bash, CShell, Fish, Nushell, PowerShell, Python
ROOT: provision> .tox/.tox/bin/python -m tox -v -e py311-test-devdeps-cat
/home/runner/work/astropop/astropop/.tox/.tox/bin/python: No module named tox
```
| 0easy
|
Title: FIX: (ENH) Fix landing page for https://arm-doe.github.io/pyart/
Body: The very first image we see on https://arm-doe.github.io/pyart/ is using the Jet Colormap. We need to fix this | 0easy
|
Title: [MNT] Replace assertions in networks module with raising Value errors
Body: ### Describe the issue
In all networks module, the parameters handled as input lists have an assertion for the length of the list, but assertions are not a good idea, they should be replaced by raise ValueError message
### Suggest a potential alternative/fix
Example
instead of:
```Python
assert len(self.strides) == self.n_conv_per_residual_block
```
It should be :
```Python
if len(self.strides) != self.n_conv_per_residual_block:
raise ValueError(
f"Number of strides {len(self.strides)} should be"
f" the same as number of convolution layers per block but is"
f" not: {self.n_conv_per_residual_block}."
)
```
### Additional context
_No response_ | 0easy
|
Title: websockets.exceptions.ConnectionClosedError - while performing simple load test on webscokets
Body: I'm performing simple load test on FastApi websockets using [artillery](https://www.artillery.io/) and I'm seeing below error sometimes (can't really say this error is because of increase in load, sometimes I see this error for very first connection also)
**error**
```py
websockets.exceptions.ConnectionClosedError: received 1005 (no status code [internal]); then sent 1005 (no status code [internal])
```
**Code**
```py
from typing import List
import uvicorn
from fastapi import FastAPI, WebSocket
app = FastAPI()
class ConnectionManager:
def __init__(self):
self.active_connections: List[WebSocket] = []
async def connect(self, websocket: WebSocket):
await websocket.accept()
self.active_connections.append(websocket)
def disconnect(self, websocket: WebSocket):
self.active_connections.remove(websocket)
async def send_personal_message(self, message: str, websocket: WebSocket):
await websocket.send_text(message)
async def broadcast(self, message: str):
for connection in self.active_connections:
await connection.send_text(message)
@app.websocket('/ws')
async def stt(ws: WebSocket):
await manager.connect(ws)
print("Connection accepted")
message = await ws.receive_text()
print(f"received message: {message}")
await ws.send_text(f"You said: {message}")
if __name__ == '__main__':
manager = ConnectionManager()
uvicorn.run(app=app, port=8082)
```
The error above occurs while sending the received text back to the client, at the line below:
```py
await ws.send_text(f"You said: {message}")
```
**Complete Stack trace**
```py
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 254, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/fastapi/applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/starlette/middleware/errors.py", line 149, in __call__
await self.app(scope, receive, send)
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/starlette/middleware/exceptions.py", line 51, in __call__
await self.app(scope, receive, send)
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/starlette/routing.py", line 680, in __call__
await route.handle(scope, receive, send)
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/starlette/routing.py", line 334, in handle
await self.app(scope, receive, send)
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/starlette/routing.py", line 81, in app
await func(session)
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/fastapi/routing.py", line 287, in app
await dependant.call(**values)
File "/Users/swastikn/my-projects/speech_to_text/sst/ws.py", line 35, in stt
await ws.send_text(f"You said: {message}")
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/starlette/websockets.py", line 163, in send_text
await self.send({"type": "websocket.send", "text": data})
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/starlette/websockets.py", line 85, in send
await self._send(message)
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 327, in asgi_send
await self.send(data) # type: ignore[arg-type]
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/websockets/legacy/protocol.py", line 635, in send
await self.ensure_open()
File "/Users/swastikn/my-projects/speech_to_text/venv/lib/python3.7/site-packages/websockets/legacy/protocol.py", line 953, in ensure_open
raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: received 1005 (no status code [internal]); then sent 1005 (no status code [internal])
INFO: connection closed
```
**artillery config.yml**
```yml
config:
target: "ws://127.0.0.1:8082/ws"
ws:
maxRedirects: 25
phases:
- duration: 3
arrivalRate: 1
rampTo: 3
name: "Max load"
scenarios:
- engine: "ws"
flow:
- send: "hello world"
```
**Note**
1. `uvicorn==0.19.0` (can reproduce issue with newer version **0.21.1** also)
2. `fastapi==0.86.0`
3. `websockets==10.4`
<!-- POLAR PLEDGE BADGE START -->
> [!IMPORTANT]
> - We're using [Polar.sh](https://polar.sh/encode) so you can upvote and help fund this issue.
> - We receive the funding once the issue is completed & confirmed by you.
> - Thank you in advance for helping prioritize & fund our backlog.
<a href="https://polar.sh/encode/uvicorn/issues/1910">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/encode/uvicorn/issues/1910/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/encode/uvicorn/issues/1910/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
| 0easy
|
Title: Add option to customise names of grid tasks.
Body: When running tasks in a grid, a number is appended at the end of the tasks product names. For example, the following pipeline.yaml file:
```
- source: example_task.py
name: example-task
product:
data: output/output_dataframe.csv
grid:
input_dataframe: ['birds.csv', 'fish.csv', 'flowers.csv']
```
would result in 3 products: output_dataframe-1.csv, output_dataframe-2.csv and output_dataframe-3.csv.
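Instead of a running index, the products could be suffixed with the grid values themselves; a hypothetical helper sketching that idea (not Ploomber's actual API):

```python
import pathlib

def grid_product_names(base_product, values):
    # Suffix the product name with each grid value's stem
    # rather than a running index.
    stem, ext = base_product.rsplit('.', 1)
    return [f"{stem}-{pathlib.Path(value).stem}.{ext}" for value in values]

names = grid_product_names('output_dataframe.csv',
                           ['birds.csv', 'fish.csv', 'flowers.csv'])
```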
I would like to have an option to replace -1, -2 and -3 in the filenames with for example -birds, -fish and -flowers. | 0easy
|
Title: fix return type of get_label_quality_scores in token classification
Body: The function only returns a `Tuple[np.ndarray, list]`, but it is annotated with:
https://github.com/cleanlab/cleanlab/blob/fad4eb266dee8b9e2925d3f0d74fe4a81939eb8a/cleanlab/token_classification/rank.py#L36 | 0easy
|
Title: Make on Windows link not working
Body: Thanks to @epogrebnyak for reporting. | 0easy
|
Title: [DOC] References to aeon.utils.registry.COLLECTIONS_DATA_TYPES
Body: ### Describe the issue linked to the documentation
There are several references in the docs to
`aeon.utils.registry.COLLECTIONS_DATA_TYPES`
in the docs. This does not exist. I think it should be just
`aeon.utils.COLLECTIONS_DATA_TYPES`

### Suggest a potential alternative/fix
_No response_ | 0easy
|
Title: Add basic CI
Body: We already have some scripts in the `scripts` directory. It would be good to run them for every PR.
This is also a great task for beginners. | 0easy
|
Title: [FEATURE]: All contributors bot
Body: ### Feature summary
integrating https://allcontributors.org/ bot
### Feature description
integrating bot for generating contributors
https://github.com/all-contributors/all-contributors
### Motivation
_No response_
### Alternatives considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: Technical Fork metric API
Body: The canonical definition is here: https://chaoss.community/?p=3431 | 0easy
|
Title: Help in version 3.0: Trape is yours.
Body: It has been a pleasure to contribute two versions of this tool to all of you.
I've been working on other open source projects that I'm about to release for you, so I haven't finished trape version 3.0.
But I invite you, and the entire community that has used this project, to collaborate with some lines of code, implementing your own ideas and improving trape, turning it into a project by everyone, for everyone. | 0easy
|
Title: DOC: remove the `readthedocs` related content from dev docs
Body: Since we are going to remove `readthedocs` from our CI workflow( #463 ), we need also to remove `readthedocs` related content from dev docs. | 0easy
|
Title: uvicorn may respond to requests sent after the client asks for the connection to be closed
Body: ### Discussed in https://github.com/encode/uvicorn/discussions/2234
<div type='discussions-op-text'>
<sup>Originally posted by **kenballus** January 28, 2024</sup>
### Describe the bug
From RFC 9112, section 9.6:
> A server that receives a "close" connection option MUST initiate closure of the connection (see below) after it sends the final response to the request that contained the "close" connection option. The server SHOULD send a "close" connection option in its final response on that connection. The server MUST NOT process any further requests received on that connection.
When uvicorn receives a pipeline with a request containing `Connection: close`, followed by an invalid request, uvicorn responds only to the second (invalid) request, even though the standard requires that uvicorn respond only to the first one.
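A toy dispatcher illustrating the required behaviour (a hedged sketch, not uvicorn's actual code):

```python
def process_pipeline(requests):
    responses = []
    for request in requests:
        responses.append(f"response to {request['path']}")
        # RFC 9112 section 9.6: after answering a request that carried
        # "Connection: close", the server MUST NOT process further requests.
        if request.get('headers', {}).get('connection', '').lower() == 'close':
            break
    return responses

responses = process_pipeline([
    {'path': '/', 'headers': {'connection': 'close'}},
    {'path': '/invalid'},
])
```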
### To Reproduce
1. Start the example server from the README.
2. Send it a pipeline consisting of a valid request with `Connection: close` set, followed by an invalid request:
```sh
printf 'GET / HTTP/1.1\r\nConnection: close\r\n\r\nInvalid\r\n\r\n' | nc localhost 8080
```
3. Observe that the only response received is intended for the invalid request:
```
HTTP/1.1 400 Bad Request
content-type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Connection: close
1e
Invalid HTTP request received.
0
```
### Expected behavior
The server should respond only to the first request, and then close the connection.
### Logs/tracebacks
```python-traceback
INFO: 127.0.0.1:51922 - "GET / HTTP/1.1" 200 OK
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/uvicorn/protocols/http/h11_impl.py", line 404, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/example.py", line 4, in app
await send({
File "/usr/local/lib/python3.11/dist-packages/uvicorn/protocols/http/h11_impl.py", line 486, in send
output = self.conn.send(event=response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/h11/_connection.py", line 512, in send
data_list = self.send_with_data_passthrough(event)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/h11/_connection.py", line 537, in send_with_data_passthrough
self._process_event(self.our_role, event)
File "/usr/local/lib/python3.11/dist-packages/h11/_connection.py", line 272, in _process_event
self._cstate.process_event(role, type(event), server_switch_event)
File "/usr/local/lib/python3.11/dist-packages/h11/_state.py", line 293, in process_event
self._fire_event_triggered_transitions(role, _event_type)
File "/usr/local/lib/python3.11/dist-packages/h11/_state.py", line 311, in _fire_event_triggered_transitions
raise LocalProtocolError(
h11._util.LocalProtocolError: can't handle event type Response when role=SERVER and state=MUST_CLOSE
```
### Python Version
```console
$ python --version
Python 3.11.2
```
### uvicorn Version
```console
$ python -m pip show uvicorn
Name: uvicorn
Version: 0.27.0
Summary: The lightning-fast ASGI server.
Home-page:
Author:
Author-email: Tom Christie <tom@tomchristie.com>
License:
Location: /usr/local/lib/python3.11/dist-packages
Requires: click, h11
Required-by:
```
### h11 Version
```console
$ python -m pip show h11
Name: h11
Version: 0.14.0
Summary: A pure-Python, bring-your-own-I/O implementation of HTTP/1.1
Home-page: https://github.com/python-hyper/h11
Author: Nathaniel J. Smith
Author-email: njs@pobox.com
License: MIT
Location: /usr/local/lib/python3.11/dist-packages
Requires:
Required-by: uvicorn
```
### OS
Debian 12 (running in Docker on Arch Linux)
Linux 6.7.2
### Additional context
Some other HTTP implementations that handle this correctly:
Apache httpd, Boost::Beast, Daphne, H2O, Lighttpd, Nginx, Tornado, OpenWrt uhttpd, Waitress
Some other HTTP implementations that also have this bug:
Mongoose, aiohttp</div>
| 0easy
|
Title: Buttons in dialogs created by `Dialogs` should get keyboard shortcuts
Body: #4619 proposed binding the `<Enter>` key to the `OK` button. That's a good idea, but we could also easily add separate keyboard shortcuts to all buttons. We can simply bind the first letter of each button (`OK`, `Cancel`, `PASS`, `FAIL`) to the same callback as the button. Should bind both the upper and the lower case versions so that pressing both `O` and `o` keys is the same as pressing the `OK` button. | 0easy
|
Title: Make inactive products and categories identifiable in dashboard list views
Body: Currently, non-public products are not easy to identify in the dashboard list views - there is nothing to distinguish them from public products and you have to view the edit form to determine whether the item is public or not.
I think it would be useful to have a column in the table to indicate whether the product is active. Same applies for categories. | 0easy
|
Title: GET requests are sending body as `{}` and Content-Length equals 2
Body: On GET every request ScanAPI is sending body as `{}` and Content-Length equals `2`. The body should be None and the Content-Length should be `0`.
A small hack to fix this while we don't have a new version is to have an empty body in the GET request specification, like this:
```yaml
path: /example
method: get
body:
```
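On the implementation side, the intended behaviour could be sketched as follows (hypothetical helper, not ScanAPI's actual code):

```python
import json

def build_payload(method, body=None):
    # A bodiless GET should send no payload (Content-Length 0),
    # not a serialized empty dict ("{}", Content-Length 2).
    if method.upper() == 'GET' and not body:
        return None, 0
    payload = json.dumps(body or {})
    return payload, len(payload)

payload, content_length = build_payload('get')
```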
Related to: https://github.com/scanapi/scanapi/issues/237 | 0easy
|
Title: Add Deribit Funding from TICKER
Body: Deribit has the funding rate in their ticker channel. It would be good to extract both the current and 8-hour funding.
{
"jsonrpc": "2.0",
"id": 8106,
"result": {
"best_ask_amount": 3010
"best_ask_price": 3562.75,
"best_bid_amount": 135710,
"best_bid_price": 3562.5,
**"current_funding": 0,
"funding_8h": 0.00018344,**
"index_price": 3563.53,
"instrument_name": "BTC-PERPETUAL",
"last_price": 3562.25,
"mark_price": 3562.62,
"max_price": 3598.21,
"min_price": 3526.96,
"open_interest": 23126.924145440054,
"settlement_price": 3573.77,
"state": "open",
"stats": {
"volume": 85934.31950267,
"low": 3492,
"high": 3660.25
},
"timestamp": 1550153198202
} | 0easy
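A hedged sketch of extracting both fields from such a payload (not cryptofeed's actual handler):

```python
def extract_funding(ticker_result):
    # Pull both funding figures out of a ticker message; perpetual
    # instruments carry them, others may not (hence .get()).
    return {
        'current_funding': ticker_result.get('current_funding'),
        'funding_8h': ticker_result.get('funding_8h'),
    }

funding = extract_funding({'instrument_name': 'BTC-PERPETUAL',
                           'current_funding': 0,
                           'funding_8h': 0.00018344})
```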
|
Title: ElementAPI compatible with item getters
Body: Please, provide access to HTML attributes through item getters.
The following code makes three assertions of which only one works.
```
import operator
import splinter
b = splinter.Browser()
b.visit('https://www.google.com/')
q = b.find_by_name('q').first
assert 'q' == q.get('name')
assert 'q' == operator.itemgetter('name')(q)
assert 'q' == q['name']
```
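A minimal sketch of how item access could delegate to the existing getter (a simplified stand-in, not splinter's real class):

```python
import operator

class Element:
    # Simplified stand-in for splinter's ElementAPI: __getitem__
    # delegates to the existing attribute getter, which also makes
    # operator.itemgetter work.
    def __init__(self, attrs):
        self._attrs = attrs

    def get(self, name):
        return self._attrs.get(name)

    def __getitem__(self, name):
        return self.get(name)

q = Element({'name': 'q'})
```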
_I would write the change myself but I have been unable to properly run the test-suite._ | 0easy
|
Title: Extend unit tests with Hypothesis framework
Body: <!-- Instructions For Filing a Bug: https://github.com/giotto-learn/giotto-learn/blob/master/CONTRIBUTING.rst -->
#### Description
<!-- Example: Joblib Error thrown when calling fit on VietorisRipsPersistence
-->
Instead of using hard-coded examples in our unit tests, we could automatically generate a range of "strategies" using the Hypothesis test framework:
* [Hypothesis](https://hypothesis.readthedocs.io/en/latest/index.html)
* [hypothesis-gufunc](https://github.com/uber/hypothesis-gufunc)
This would allow us to catch a wider range of edge cases.
| 0easy
|
Title: URL shortener integration
Body: | 0easy
|
Title: [Malfunction] Tampermonkey script no longer works
Body: **Problem description**
The Tampermonkey userscript has stopped working.
A clear and concise description of what the bug is.
**Steps to reproduce**
Steps to reproduce the behavior:
1. ...
2. ...
3. ...
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other contextual information about the issue here, such as operating system, runtime mode, configuration files, error screenshots, runtime logs, etc.
Please note: When providing configuration files, please delete cookie content to avoid sensitive data leakage!
| 0easy
|
Title: Add warning in crud docs - crud_entity.get and crud_entity.get_multi now return Row
Body: Add warning should be added in the CRUD section in readme:
- `crud_entity.get` and `crud_entity.get_multi` now return `sqlalchemy.engine.row.Row` instead of an `ORM` model | 0easy
|
Title: Replace ugettext_lazy with gettext_lazy
Body: PyTest throws multiple warnings with:
`RemovedInDjango40Warning: django.utils.translation.ugettext_lazy() is deprecated in favor of django.utils.translation.gettext_lazy().` | 0easy
|
Title: goals on nutritional plan remain set
Body: ## Steps to Reproduce
1. create nutritional plan
2. under edit, "add goals to this plan"
3. add a number under protein
4. save
5. edit the plan again, and UNCHECK "add goals to this plan"
6. save again
7. open "edit" again. "add goals to this plain" is still checked with the goal intact. it should be unchecked
i think the proper fix is, when you uncheck the "add goals", is to unset all goals fields - that's what the flutter app does.
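A minimal sketch of that fix (hypothetical field names):

```python
# Hypothetical goal field names; the real model may differ.
GOAL_FIELDS = ('goal_energy', 'goal_protein', 'goal_carbohydrates', 'goal_fat')

def save_plan(plan, add_goals):
    # When the "add goals" checkbox is unchecked, clear every goal
    # field so the form comes back unchecked on the next edit.
    if not add_goals:
        for field in GOAL_FIELDS:
            plan[field] = None
    return plan

plan = save_plan({'goal_energy': None, 'goal_protein': 120,
                  'goal_carbohydrates': None, 'goal_fat': None},
                 add_goals=False)
```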
| 0easy
|
Title: BUG: Running into issues with `RadarDisplay.plot_vpt` time axis
Body: When setting `time_axis_flag` = True within `plot_vpt`, Matplotlib runs into error stating:
```python
TypeError: Dimensions of C (600, 1736) are incompatible with X (1736) and/or Y (601); see help(pcolormesh)
```
@zssherman mentioned that he had fixed this issue in another plotting view (#1025), which provides some hints on how to fix this issue.. | 0easy
|
Title: Lower and Upper BBands Reversed
Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.**
0.2.83b0
**Describe the bug**
In commit 66890ad3fce61f17dd50e728d103a22e5788f8e8, the lower and upper bbands were reversed.
https://github.com/twopirllc/pandas-ta/blob/main/pandas_ta/volatility/bbands.py#L24 should be
upper, mid, lower = BBANDS(close, length) as seen in the ta-lib documentation, https://mrjbq7.github.io/ta-lib/func_groups/overlap_studies.html.
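For reference, a minimal pure-Python sketch (assumed parameters, not pandas-ta's implementation) that returns the bands in TA-Lib's upper/mid/lower order:

```python
def bbands(close, length=3, num_std=2.0):
    # Rolling mean +/- num_std population standard deviations.
    upper, mid, lower = [], [], []
    for i in range(len(close)):
        window = close[max(0, i - length + 1): i + 1]
        mean = sum(window) / len(window)
        std = (sum((x - mean) ** 2 for x in window) / len(window)) ** 0.5
        mid.append(mean)
        upper.append(mean + num_std * std)
        lower.append(mean - num_std * std)
    return upper, mid, lower  # TA-Lib's BBANDS order: upper, mid, lower

upper, mid, lower = bbands([1.0, 2.0, 3.0, 4.0, 5.0])
```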
| 0easy
|
Title: Incomplete example about query variables
Body: In the docs, there's an example about how to feed variables to a query [(link)](https://docs.graphene-python.org/en/latest/execution/execute/)
However, the example is incomplete as there is nothing demonstrating how to consume the variables in the resolver:
```python
class Query(graphene.ObjectType):
    user = graphene.Field(User)

    def resolve_user(self, info):
        return info.context.get('user')

schema = graphene.Schema(Query)
result = schema.execute(
    '''query getUser($id: ID) {
      user(id: $id) {
        id
        firstName
        lastName
      }
    }''',
    variables={'id': 12},
)
```
|
Title: Typo in Docs: atrrs not an argument of visualize_tokens function(attrs rather)
Body: Awesome tool @ines . Just a slight typo in the docs for the `visualize_tokens()` function.
In the example `attrs` is spelt as `atrrs`. We need to make some slight changes.

Great Work!!. Thanks | 0easy
|
Title: Configuring ranges does not allow excluding products through the UI
Body: The range model allows exclusion of specific products:
https://github.com/django-oscar/django-oscar/blob/151dc05395865ec0f51cdd45be9970a1bbea2f0c/src/oscar/apps/offer/abstract_models.py#L854
However, you can not configure those through the dashboard UI, while it's a pretty useful feature.
You can configure single included products, with the "Save and edit products" button;
<img width="349" alt="image" src="https://user-images.githubusercontent.com/7702200/220860699-86ca45f9-8e73-4648-b978-8a4b242c7708.png">
We could also allow users to exclude products in this view, though we would probably need to make the view UI a bit clearer here, as to me it already looks a bit clumsy.
A separate view would do as well, but it should inherit the logic from the edit products view, as the only difference should be that it saves the products in the `excluded_products` field rather than the `included_products` field | 0easy
|
Title: `Message.id` broken if parent is not `Keyword` or `ExecutionErrors`
Body: ```python
>>> from robot.result import Keyword, TestCase, Var
>>> from robot.result import Keyword, TestCase, While
>>>
>>> Keyword().body.create_message().id
'k1-m1'
>>> TestCase().body.create_message().id
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/peke/Devel/robotframework/src/robot/model/message.py", line 64, in id
messages = self.parent.messages
^^^^^^^^^^^^^^^^^^^^
AttributeError: 'TestCase' object has no attribute 'messages'. Did you mean: 'message'?
>>> While().body.create_message().id
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/peke/Devel/robotframework/src/robot/model/message.py", line 64, in id
messages = self.parent.messages
^^^^^^^^^^^^^^^^^^^^
AttributeError: 'While' object has no attribute 'messages'. Did you mean: 'message'?
```
The reason is that the `Message.id` property tries to access `parent.messages` which most of the possible parent objects don't have. This is easy to fix by using `parent.body.filter(messages=True)` in that case.
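A minimal sketch of that fallback, using heavily simplified stand-in classes (not Robot Framework's real model):

```python
class Body(list):
    def filter(self, messages=False):
        # Keep only Message items when messages=True.
        return [i for i in self if isinstance(i, Message)] if messages else list(self)

class Keyword:
    def __init__(self, id='k1'):
        self.id = id
        self.body = Body()

class Message:
    def __init__(self, parent):
        self.parent = parent
        parent.body.append(self)

    @property
    def id(self):
        # Fall back to filtering the parent's body when the parent has
        # no dedicated `messages` attribute (e.g. TestCase, While).
        if hasattr(self.parent, 'messages'):
            messages = self.parent.messages
        else:
            messages = self.parent.body.filter(messages=True)
        return f'{self.parent.id}-m{messages.index(self) + 1}'

msg = Message(Keyword())
```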
Apparently our code nor nobody else has used `Message.id`, because our tests pass and nobody has reported this. Instead of fixing `Message.id` we could simply remove it, but I believe its good that all body items have `id`. | 0easy
|
Title: How to back up and restore data when migrating servers
Body: Hello author, after using this card-issuing (faka) system I find it very easy to use. I'm currently running it with Docker via the BaoTa panel. My question: if I switch servers, how do I back up the existing data and restore it on the new server? In the admin panel I only see an option to restore order data. Thanks. | 0easy
|
Title: [feature] update metadata on ec2 spot termination for batch
Body: **Description**: When a step is executing on AWS Batch with `@batch`, and if it is a spot instance, the metadata of the run should dynamically be updated to include the spot termination notice when the instance is interrupted. Check the screenshot below for desired results.
<img width="378" alt="Image" src="https://github.com/user-attachments/assets/aa72bfb4-789a-4d61-9051-b5d2040c7851" />
Reference implementation for `@kubernetes` and argo-workflows here: https://github.com/Netflix/metaflow/pull/2207 | 0easy
|
Title: create a deepcopy of TaskSpec init params
Body: `data`, and `meta` params in the TaskSpec constructor are dictionaries: https://github.com/ploomber/ploomber/blob/c87c60db954f72309b1e421a5399f9f6a426e5fe/src/ploomber/spec/taskspec.py#L166
Since both of them are modified inside the class, we should call `copy.deepcopy` to prevent side-effects outside the class
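A minimal sketch of the idea, with a hypothetical simplified constructor:

```python
import copy

class TaskSpec:
    # Hypothetical simplification of the real class: the constructor
    # deep-copies its dict arguments before mutating them, so in-place
    # changes do not leak back to the caller's objects.
    def __init__(self, data, meta):
        self.data = copy.deepcopy(data)
        self.meta = copy.deepcopy(meta)
        self.data['processed'] = True  # internal mutation stays internal

data = {'source': 'task.py'}
spec = TaskSpec(data, meta={'extract_product': False})
```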
https://github.com/ploomber/ploomber/blob/master/src/ploomber/spec/taskspec.py#L166 | 0easy
|
Title: "ploomber scaffold" should create missing modules when scaffolding functions
Body: If `ploomber scaffold` finds a `pipeline.yaml` it checks all `tasks[*].sources` and creates files for all tasks whose source is missing. e.g.,
```yaml
tasks:
- source: some_module.some_function
product: output.csv
```
If `some_module.py` exists, `ploomber scaffold` adds a function definition there. However, if it doesn't, it fails.
Calling the command should create all necessary modules. Note that this could be a nested module. e.g.,
```yaml
tasks:
- source: some_module.submodule.function
product: output.csv
```
This must create `some_module/` (directory), `some_module/__init__.py`, and `some_module/submodule.py`, the call the already implemented logic to create a function inside `some_module/submodule.py`.
`loader.create` implements the file creation logic: https://github.com/ploomber/ploomber/blob/ac915ea998f4da0177204f2189f8f608f3404fc6/src/ploomber/scaffold/__init__.py#L42
add tests here: https://github.com/ploomber/ploomber/blob/master/tests/cli/test_scaffold.py
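A hedged sketch of the module-creation step described above (hypothetical helper, separate from the existing `loader.create` logic):

```python
import pathlib
import tempfile

def ensure_module(root, dotted_path):
    # Create package directories (each with __init__.py) for every
    # segment before the module; the second-to-last segment becomes
    # the module file, and the last segment is the function name.
    *module_parts, _function = dotted_path.split('.')
    current = pathlib.Path(root)
    for part in module_parts[:-1]:
        current = current / part
        current.mkdir(exist_ok=True)
        (current / '__init__.py').touch()
    module = current / f'{module_parts[-1]}.py'
    module.touch()
    return module

with tempfile.TemporaryDirectory() as tmp:
    module = ensure_module(tmp, 'some_module.submodule.function')
    module_created = module.exists()
    init_created = (module.parent / '__init__.py').exists()
```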
## Tasks
- [x] Implement 'skip' logic to DAGSpec and TAskSpec: passing `lazy_import='skip'` should not perform any dotted paths validation and add tests with `lazy_import='skip'`
- [x] Implement `ploomber scaffold` feature that creates missing modules
- [ ] Rename `lazy_import` flag (not sure about what would be a good name, maybe `import_mode`?)
Please open a PR with `scaffold-functions` as the base branch after finishing any of the three tasks above so we can review it and move to the next one.
| 0easy
|
Title: [Feature] the stream request returns data without usage.token data
Body: ### Checklist
- [ ] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 2. Please use English, otherwise it will be closed.
### Motivation
The cost is calculated through the usage.token returned by the HTTP response. Non-stream requests have a usage.token cost, but the returned usage.token for stream results is empty.
🚀 Is it possible to add the number of tokens when the stream returns data? 🚀
tks.
```
data: {"id":"e1b9eecee85b4379a5806f53b6d9ec94","object":"chat.completion.chunk","created":1740117922,"model":"/cephfs/public_model/DeepSeek-R1","choices":[{"index":0,"delta":{"role":null,"content":"。","tool_calls":null},"logprobs":null,"finish_reason":"","matched_stop":null}],"usage":null}
data: {"id":"e1b9eecee85b4379a5806f53b6d9ec94","object":"chat.completion.chunk","created":1740117922,"model":"/cephfs/public_model/DeepSeek-R1","choices":[{"index":0,"message":{"role":"","content":""},"delta":{"content":""},"finish_reason":"stop"}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0,"prompt_tokens_details":{"cached_tokens":0},"prompt_cache_hit_tokens":0,"prompt_cache_miss_tokens":0},"system_fingerprint":"","timings":{"prompt_n":0,"prompt_ms":0,"prompt_per_token_ms":0,"prompt_per_second":0,"predicted_n":0,"predicted_ms":0,"predicted_per_token_ms":0,"predicted_per_second":0}}
data: [DONE]
```
### Related resources
_No response_ | 0easy
|
Title: [ENH] address `numpy 2` deprecations
Body: Current test execution causes these warnings:
`Warning: __array_wrap__ must accept context and return_scalar arguments (positionally) in the future. (Deprecated NumPy 2.0)`
We should diagnose the source and address the deprecation - noting that code should work under `numpy 1` as well as `numpy 2`. | 0easy
|
Title: Marketplace - margin between breadcrumb & nav bar
Body: ### Describe your issue.
<img width="896" alt="Screenshot 2024-12-17 at 19 16 22" src="https://github.com/user-attachments/assets/5d89e687-9297-4a3a-b736-b182b8d3a651" />
This element appears in both the agent page and the creator page, please reduce the margin between these two elements
| 0easy
|
Title: CategoricalVariableImputer
Body: Allow user to select the string with which to replace missing data. At the moment, the only value allowed is the default one == 'missing'
| 0easy
|
Title: [Serve] Display error message on old service before update mode is introduced.
Body: <!-- Describe the bug report / feature request here -->
Seems like in the following code, we just silently ignored the `mode` argument.
https://github.com/skypilot-org/skypilot/blob/d5b6d89c83ea1ee7258f68314da4c6f8add83e04/sky/serve/serve_utils.py#L952-L966
Expected behaviour: if a user specified a rolling update on an old service, we should raise an error saying that rolling update is not supported on this service.
|
Title: [Usage]: Clean up Engine Args & Documentation
Body: ### Your current environment
Currently vLLM has a lot of engine arguments listed here https://docs.vllm.ai/en/latest/serving/engine_args.html. Over time as we add more and more features to vLLM, this list will be less maintainable and user friendly.
### How would you like to use vllm
As a first step to clean up these args, they should be made **hierarchical** (for example, `--compilation-config`).
The documentation should also be updated so that engine arg documentation is **arranged in sections instead of in a flat list**.
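As an illustration, a hierarchical argument can be a single JSON-valued flag; a hedged stdlib sketch (flag and field names are illustrative):

```python
import argparse
import json

# Sketch: one hierarchical flag that accepts a JSON object, in the spirit
# of --compilation-config, instead of many flat top-level flags.
parser = argparse.ArgumentParser()
parser.add_argument("--compilation-config", type=json.loads, default={})

args = parser.parse_args(
    ["--compilation-config", '{"level": 3, "backend": "inductor"}']
)
```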
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 0easy
|
Title: CaptureAsImage results a black image for second monitor (maybe re-use pyWin32?)
Body: In a simple test, using Notepad as an example, the CaptureAsImage method results in a fully black image file,
as shown below:

The code that I'm using is:
from pywinauto.application import Application
app = Application().start('notepad.exe')
app.top_window_().set_focus().CaptureAsImage().save(r"d:\img.png")
Thank you
| 0easy
|
Title: Web-gui: CNAME content does not allow leading asterisk or underscore
Body: When trying to create CNAME records, the web-gui rejects an asterisk or underscore as the leading character in the content.
Failing examples:
```
_DMARC 3600 IN CNAME _DMARC.example.org.
*.udp 3600 IN CNAME *.tcp.example.org.
``` | 0easy
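For reference, a hedged sketch of a more permissive name check in Python (the actual PowerDNS-Admin validation code may look quite different; the regex here is illustrative):

```python
import re

# Sketch: allow an optional leading "*." wildcard label and
# underscore-prefixed labels (e.g. _dmarc), both of which are legal
# in CNAME owner names and targets.
LABEL = r"[A-Za-z0-9_]([A-Za-z0-9_-]*[A-Za-z0-9_])?"
NAME_RE = re.compile(rf"^(\*\.)?({LABEL}\.)*{LABEL}\.?$")

def is_valid_name(name):
    return bool(NAME_RE.match(name))
```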
|
Title: Include a filter to see only unpublished posts in admin
Body: | 0easy
|
Title: Accessing `id` property of model objects may cause `ValueError`
Body: With 6.1a1 (not seen with 6.0.1), the following construct no longer works to retrieve the currently executed keyword's ID from the stack IF the keyword is executed via `Run Keyword ...` variants:
```
for stack in inspect.stack():
frame = inspect.getargvalues(stack[0])
if getattr(frame, 'locals', None) and frame.locals.get('step'):
return frame.locals['step'].id
```
This can easily be reproduced using the following test suite and custom library:
```
*** Settings ***
Library testlib.py
*** Test Cases ***
# the first test passes, the two remaining fail
Test KW
Test Keyword
Test Run KW
Run Keyword Test Keyword
Test Run KW and expect error
Run Keyword and Expect Error you told me to fail Test Keyword ${TRUE}
```
testlib.py:
```
import inspect
from robot.api import logger
def _get_current_keyword_id():
"""
Traverses the stack to find the current keyword and returns its id.
"""
for stack in inspect.stack():
frame = inspect.getargvalues(stack[0])
if getattr(frame, 'locals', None) and frame.locals.get('step'):
return frame.locals['step'].id
def test_keyword(fail=False):
logger.info(_get_current_keyword_id())
assert not fail, 'you told me to fail'
```
execution with 6.1a1:
```
root@36c9fa153edc:workspace# robot -L TRACE keyword-id.robot
==============================================================================
Keyword-Id
==============================================================================
Test KW | PASS |
------------------------------------------------------------------------------
Test Run KW | FAIL |
ValueError: robot.running.Keyword(name='Test Keyword', args=[], assign=()) is not in list
------------------------------------------------------------------------------
Test Run KW and expect error | FAIL |
Expected error 'you told me to fail' but got 'ValueError: robot.running.Keyword(name='Test Keyword', args=['${TRUE}'], assign=()) is not in list'.
------------------------------------------------------------------------------
Keyword-Id | FAIL |
3 tests, 1 passed, 2 failed
==============================================================================
```
stack trace shown in log:
```
Traceback (most recent call last):
File "/home/cisco/cxta/workspace/testlib.py", line 16, in test_keyword
logger.info(_get_current_keyword_id())
File "/home/cisco/cxta/workspace/testlib.py", line 12, in _get_current_keyword_id
return frame.locals['step'].id
File "/venv/lib/python3.9/site-packages/robot/model/body.py", line 58, in id
return self._get_id(self.parent)
File "/venv/lib/python3.9/site-packages/robot/model/body.py", line 69, in _get_id
my_id = steps.index(self) + 1
ValueError: robot.running.Keyword(name='Test Keyword', args=[], assign=()) is not in list
```
execution with 6.0.1 passes. | 0easy
|
Title: AsyncResult with exception accumulates traceback every time it is raised
Body: * gevent version: 22.10.2 from PyPI.
* Python version: Python 3.11.2
* Operating System: Fedora 6.2.9-200.fc37.x86_64
### Description:
Calling get on AsyncResult from multiple greenlets leads to a different traceback in every greenlet, and it can be observed that the tracebacks are getting accumulated despite there being no new tracebacks at all
```python-traceback
ERROR:root:something happened
Traceback (most recent call last):
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
Exception
ERROR:root:something happened
Traceback (most recent call last):
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
Exception
ERROR:root:something happened
Traceback (most recent call last):
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
Exception
ERROR:root:something happened
Traceback (most recent call last):
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
Exception
ERROR:root:something happened
Traceback (most recent call last):
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
File "/home/krishna/test_gevent", line 11, in <module>
result.get()
File "src/gevent/event.py", line 329, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 347, in gevent._gevent_cevent.AsyncResult.get
File "src/gevent/event.py", line 327, in gevent._gevent_cevent.AsyncResult._raise_exception
File "/home/krishna/.local/lib/python3.11/site-packages/gevent/_compat.py", line 67, in reraise
raise value
Exception
```
### What I've run:
Wrote a simple script to simulate a single AsyncResult with exception getting raised by several greenlets using a for loop for testing purposes
```python
import logging
import gevent.event
result = gevent.event.AsyncResult()
result.set_exception(Exception())
for _ in range(5):
try:
result.get()
except:
logging.exception("something happened")
```
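The accumulation can be reproduced in plain Python, since re-raising the same exception instance appends frames to its `__traceback__`; a sketch of the effect and a possible caller-side workaround (not gevent's internal fix):

```python
import traceback

def tb_len(exc):
    return len(traceback.extract_tb(exc.__traceback__))

exc = Exception("boom")

# Re-raising the same exception instance appends the raise site to its
# __traceback__ every time -- this is the accumulation seen above.
for _ in range(3):
    try:
        raise exc
    except Exception:
        pass
grown = tb_len(exc)

# Workaround sketch: detach the stored traceback before re-raising, so
# each consumer gets a fresh, single-frame traceback instead of the
# ever-growing one.
try:
    raise exc.with_traceback(None)
except Exception:
    pass
reset = tb_len(exc)
```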
| 0easy
|
Title: [UX] Sky silently treat task YAML as command when there is an extra space
Body: ## Repro
```
$ sky launch task.yaml -c test --env A= c
Command to run: task.yaml c
$ sky launch task.yaml -c test --env A=c
YAML to run: task.yaml
```
`sky launch` collects all args without an option key as the entrypoint, which is counter-intuitive and hides typo errors like the one above | 0easy
|
Title: tox 4 ignores TOX_SKIP_ENV
Body: ## Issue
My `make release` for [gitlab-trace](https://github.com/mgedmin/gitlab-trace) runs `TOX_SKIP_ENV=check-manifest tox -p auto`. This worked with tox 3.x. It now fails in tox 4, because it doesn't skip the check-manifest toxenv.
Since the documentation page describing the TOX_SKIP_ENV has 'latest' in the URL (https://tox.wiki/en/latest/config.html#envlist), I think it should still be supported?
## Environment
Provide at least:
- OS: Ubuntu 22.10
- `pip list` of the host Python where `tox` is installed:
```console
$ pipx runpip tox list
Package Version
------------- -------
cachetools 5.2.0
chardet 5.1.0
colorama 0.4.6
distlib 0.3.6
filelock 3.8.2
packaging 22.0
pip 22.3.1
platformdirs 2.6.0
pluggy 1.0.0
py 1.11.0
pyparsing 3.0.9
pyproject_api 1.2.1
setuptools 65.6.3
six 1.16.0
toml 0.10.2
tomli 2.0.1
tox 4.0.2
virtualenv 20.17.1
wheel 0.38.4
```
|
Title: use IFrame to display embedded D3 plot
Body: when generating a pipeline plot with the D3 backend with `output='embed'`:
https://github.com/ploomber/ploomber/blob/77d7dbf6b143ee17f886f76b6f3163d0ad5a39c9/src/ploomber/dag/dag.py#L893
we use the requests-html library to load the HTML file since we need to execute the embedded JS for the plot to display:
https://github.com/ploomber/ploomber/blob/77d7dbf6b143ee17f886f76b6f3163d0ad5a39c9/src/ploomber/dag/plot.py#L66
and then we render the plot with an `IPython.display.HTML` object, which allows us to embed the plot in Jupyter. However, as pointed out in https://github.com/ploomber/projects/issues/42, IPython has an `IFrame` object that correctly displays HTML and runs JS.
We should remove the requests-html code and replace it with `IFrame`.
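A hedged sketch of what the IFrame-based rendering could look like (assuming the plot has been written to a local `pipeline.html`; the file name is illustrative):

```python
from IPython.display import IFrame

# Instead of loading the HTML with requests-html and executing the JS
# ourselves, let the browser do it inside an iframe.
plot = IFrame(src="pipeline.html", width="100%", height=600)
```

In a notebook, simply returning `plot` from a cell renders the iframe.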
| 0easy
|
Title: Special key to ignore all migrations inside an app.
Body: It would be useful to have special key for ignoring all migrations inside an app.
Consider the case where we have some third-party apps and want to ignore all of their migrations, or migrations for all third-party apps at once.
Right now we should do something like this:
```python
DTM_IGNORED_MIGRATIONS = [
('taggit', '0002_auto_20150616_2121'),
('two_factor', '0002_auto_20150110_0810'),
('two_factor', '0003_auto_20150817_1733'),
('two_factor', '0004_auto_20160205_1827'),
('two_factor', '0005_auto_20160224_0450'),
('waffle', '0002_auto_20161201_0958'),
]
```
With the special key, for example, it'll get much better:
```python
THIRD_PARTY_APPS = (
'taggit',
'two_factor',
'waffle',
)
# Apps specific for this project go here.
LOCAL_APPS = (
'app_1',
'app_2',
)
INSTALLED_APPS = THIRD_PARTY_APPS + LOCAL_APPS
DTM_IGNORED_MIGRATIONS = [
(app, '*') for app in THIRD_PARTY_APPS
]
```
| 0easy
|
Title: Suggest using a remote debugger for debugging in documentation
Body: It would be helpful if the documentation suggested to use a remote debugger (like [remote-pdb](https://github.com/ionelmc/python-remote-pdb)) to circumvent the issue that stdin/stdout can't forwarded.
For example, there could be a short paragraph in "Known limitations" that states that while the absence of forwarding of stdin/stdout means that many debuggers don't work, this approach does.
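Such a paragraph could include a minimal sketch of the idea, i.e. serving a `pdb` session over a socket the way remote-pdb does (stdlib-only illustration; remote-pdb's actual entry point is `RemotePdb(host, port).set_trace()`):

```python
import pdb
import socket
import sys

def remote_set_trace(host="127.0.0.1", port=4444):
    """Serve a pdb session over TCP (sketch of what remote-pdb does).

    Connect with e.g. `telnet 127.0.0.1 4444` to drive the debugger,
    working around stdin/stdout not being forwarded.
    """
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind((host, port))
    listener.listen(1)
    conn, _ = listener.accept()          # blocks until a client attaches
    handle = conn.makefile("rw")
    # Pdb accepts arbitrary file-like stdin/stdout, so the socket can
    # stand in for the terminal.
    debugger = pdb.Pdb(stdin=handle, stdout=handle)
    debugger.set_trace(sys._getframe().f_back)
```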
This would have saved me a couple of hours of trying to get around that limitation just now. | 0easy
|
Title: feat: support `msgspec` as another schema definition
Body: ### Describe the feature
So far, we're using `pydantic` since:
1. validation
2. generate OpenAPI JSON schema
Recently, another library called `msgspec` has offered the same features with great performance. (pydantic is also rewriting its core in Rust.)
We should consider adding `msgspec` as another schema definition.
Refer to https://jcristharif.com/msgspec/jsonschema.html
### Additional context
_No response_ | 0easy
|
Title: Resolving special variables can fail with confusing message
Body: Considering such example:
```
*** Variables ***
${TABLE} example
*** Test Cases ***
Special Variables
IF all(item not in $TABLE for item in ("one", "two"))
Log ${TABLE}
ELSE
Log not ${TABLE}
END
```
I'm not sure if ``all(item not in $TABLE for item in ("one", "two"))`` is supported at all but the error I'm getting looks incorrect - it contains 'internal' naming used in RF (with prefix ``RF_VAR_``):
```
NameError: name 'RF_VAR_TABLE' is not defined
``` | 0easy
|
Title: Change source distribution format from deprecated `zip` to `tar.gz`
Body: Python source distribution format has been standardized to `tar.gz` by [PEP 625](https://peps.python.org/pep-0625/). We currently use `zip` and PyPI has recently started sending deprecation warnings about that. Best to change the format before its support is removed altogether. In addition to changing the package itself, we need to make sure installation instructions are updated.
This change doesn't affect normal users because `pip install robotframework` uses the wheel distribution and even when installing a downloaded source distribution, pip handles both formats fine. The change affects only users who download the source distribution and extract it, but that kind of usage is likely pretty rare. Those who actually want to see the source, typically clone the repository instead. Anyway, the change is somewhat backwards incompatible and needs to be mentioned in the release notes. | 0easy
|
Title: [FEATURE]: Git Co-Author - GitHub actions
Body: ### Feature summary
A problem to fix when commits are squashed.
### Feature description
This is a common problem: when commits are squashed during PR merges, credit for the original contributors gets lost.
The git co-author script addresses this:
https://www.dormant.ninja/git-co-author-script/
Draft co-author GitHub Actions workflow:
```yml
name: Add Co-Authors
on:
pull_request:
types: [closed]
jobs:
add-coauthors:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Add co-authors
run: |
# Extract contributors and append as co-authors (refer script)
```
```sh
# Define the base branch (e.g., main or master)
BASE_BRANCH=main
# Find the merge base (divergence point) between the current branch and the base branch
MERGE_BASE=$(git merge-base HEAD $BASE_BRANCH)
# Extract contributors from the commits starting at the merge base up to HEAD
# and emit them as Co-authored-by lines; use `while read` so that
# "Name <email>" entries are not split on whitespace
git log "$MERGE_BASE..HEAD" --format='%aN <%aE>' | sort -u | while read -r CONTRIBUTOR; do
  echo "Co-authored-by: $CONTRIBUTOR"
done
```
### Motivation
_No response_
### Alternatives considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: [ENH] input to load_from_tsfile
Body: ### Describe the feature or idea you want to propose
the function ``load_from_tsfile`` assumes that the extension is not given and (not unreasonably, given the name) appends it as ".ts"
```python
from aeon.datasets import load_from_tsfile
full = root+classification[0]
X,y = load_from_tsfile(full_file_path_and_name=full)
```
this came up when someone gave me a file with .txt extension (but in valid data format).
### Describe your proposed solution
Might be nice to allow an extension and only append .ts if none is given? Currently it will try to load .ts.ts if you pass full_file_path_and_name with .ts on the end. The code is a bit convoluted due to paths, but it should be a simple change: just check if there is a file name extension; if not, append .ts; if there is, try with what is given.
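A minimal sketch of the proposed check (stdlib only, hypothetical helper name; the actual loader code is more involved):

```python
import os

def resolve_ts_path(full_file_path_and_name):
    # Proposed behaviour: append ".ts" only when no extension is present;
    # otherwise try the path exactly as given (.ts, .txt, ...).
    _, ext = os.path.splitext(full_file_path_and_name)
    if ext == "":
        return full_file_path_and_name + ".ts"
    return full_file_path_and_name
```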
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_ | 0easy
|
Title: Update Response:add_link() to support setting the crossorigin attribute
Body: | 0easy
|
Title: missing `type` from `link`
Body: | 0easy
|
Title: DS/DNSKEY Info Dialog
Body: To improve UI, could look somewhat like this:

| 0easy
|
Title: Admin UI does not show message for deleted objects
Body: **Describe the bug**
If you delete an object via the interface, there is no message to indicate if objects were deleted.

**To Reproduce**
- Login to Admin interface
- Import books.csv (tests/core/exports/)
- Import books-for-delete.csv
- You will not see any message for delete
**Versions (please complete the following information):**
- Django Import Export: 3.3.1
- Python 3.11
- Django 4.2
**Expected behavior**
A message indicating that delete was successful
| 0easy
|
Title: [Feature] Prefill assistant response
Body: ### Checklist
- [x] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 2. Please use English, otherwise it will be closed.
### Motivation
The OAI API doesn't natively support prefilling an assistant's response. vLLM and Aphrodite have additional support for `continue_final_message`, which SGLang would need in order to give developers even more control.
Should be relatively easy for someone to implement: simply don't let the chat template's EOS take over when the assistant response is last, this flag is enabled, and a generation is requested. This was originally implemented under the exact same parameter name in transformers, and later became a feature in vLLM and Aphrodite.
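A toy sketch of the templating behaviour described above (hypothetical function, not SGLang's actual code path):

```python
def render(messages, continue_final_message=False):
    # Toy chat template: "<role>: <text>", assistant turns end with <eos>.
    out = []
    for i, m in enumerate(messages):
        line = f"{m['role']}: {m['content']}"
        last = i == len(messages) - 1
        # Suppress EOS when the final assistant message should be continued.
        if m["role"] == "assistant" and not (last and continue_final_message):
            line += "<eos>"
        out.append(line)
    prompt = "\n".join(out)
    if not continue_final_message:
        prompt += "\nassistant: "   # normal generation prompt
    return prompt
```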
### Related resources
https://huggingface.co/docs/transformers/main/en/chat_templating
https://github.com/aphrodite-engine/aphrodite-engine/blob/e64075b8937786311f6441fab5103f9ebf4e1dd8/aphrodite/endpoints/openai/protocol.py#L225-L233
https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#id7
Not seeing any extra parameter support
https://docs.sglang.ai/backend/openai_api_completions.html
| 0easy
|
Title: [Feature]: Audit and Update Examples To Use `VLLM_USE_V1=1`
Body: ### 🚀 The feature, motivation and pitch
Many of the examples leverage V0 internals.
We should:
- raise `NotImplementedError` if `envs.VLLM_USE_V1` with these
- convert them to use `V1` if we can
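The first bullet could be a small guard at the top of each V0-dependent example; a hedged sketch reading the environment variable directly (the real code would use `envs.VLLM_USE_V1`):

```python
import os

def require_v0(example_name):
    # Sketch of the proposed guard for examples that rely on V0 internals.
    if os.environ.get("VLLM_USE_V1", "0") == "1":
        raise NotImplementedError(
            f"{example_name} uses V0 internals; unset VLLM_USE_V1 to run it."
        )
```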
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 0easy
|
Title: [Bug]: vLLM ModelConfig doesn't pass hf_overrides to get_hf_image_processor_config, which could contain auth token for hugging face (not in ENV)
Body: ### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
INFO 03-14 23:03:57 __init__.py:183] Automatically detected platform cuda.
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Linux Mint 21.3 (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X3D 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 5858.0000
CPU min MHz: 545.0000
BogoMIPS: 8399.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 128 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.7.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnxruntime==1.20.1
[pip3] pyzmq==26.2.0
[pip3] rapidocr-onnxruntime==1.3.24
[pip3] sentence-transformers==3.3.1
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformers==4.46.3
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] numpydoc 1.7.0 py312h06a4308_0
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-ml-py 12.570.86 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pyzmq 26.2.1 pypi_0 pypi
[conda] sentence-transformers 3.3.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] transformers 4.46.3 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X 0-31 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
LD_LIBRARY_PATH=/home/mukmckenzie/anaconda3/envs/llm/lib/python3.12/site-packages/cv2/../../lib64:/usr/lib/x86_64-linux-gnu:
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
```
</details>
### 🐛 Describe the bug
vLLM ModelConfig in config.py doesn't pass hf_overrides to get_hf_image_processor_config.
```python
if hf_overrides_kw:
logger.info("Overriding HF config with %s", hf_overrides_kw)
hf_config.update(hf_overrides_kw)
if hf_overrides_fn:
logger.info("Overriding HF config with %s", hf_overrides_fn)
hf_config = hf_overrides_fn(hf_config)
self.hf_config = hf_config
self.hf_text_config = get_hf_text_config(self.hf_config)
self.encoder_config = self._get_encoder_config()
self.hf_image_processor_config = get_hf_image_processor_config(
self.model, revision)
####################
def get_hf_image_processor_config(
model: Union[str, Path],
revision: Optional[str] = None,
**kwargs,
) -> Dict[str, Any]:
# ModelScope does not provide an interface for image_processor
if VLLM_USE_MODELSCOPE:
return dict()
# Separate model folder from file path for GGUF models
if check_gguf_file(model):
model = Path(model).parent
return get_image_processor_config(model, revision=revision, **kwargs)
####################
#transformers/models/auto/image_processing_auto.py
def get_image_processor_config(
pretrained_model_name_or_path: Union[str, os.PathLike],
cache_dir: Optional[Union[str, os.PathLike]] = None,
force_download: bool = False,
resume_download: Optional[bool] = None,
proxies: Optional[Dict[str, str]] = None,
token: Optional[Union[bool, str]] = None,
revision: Optional[str] = None,
local_files_only: bool = False,
**kwargs,
):
use_auth_token = kwargs.pop("use_auth_token", None)
if use_auth_token is not None:
warnings.warn(
"The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.",
FutureWarning,
)
if token is not None:
raise ValueError("`token` and `use_auth_token` are both specified. Please set only the argument `token`.")
token = use_auth_token
......
```
This `hf_overrides` dict might contain the auth token from Hugging Face. From unsloth, the only way to pass the auth token to vLLM (not via env) is through `hf_overrides`. When pulling custom models from Hugging Face through vLLM, we face this error:
```
RuntimeError: void-mckenzie/krikri-sft_compound_instruct is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
```
This limits the usage of multiple HF Tokens with vLLM. As you can see from the above snippet, get_image_processor_config in transformers accepts token. We just need to pass it when calling get_hf_image_processor_config from vllm/config.py
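A hedged sketch of the proposed fix with toy stand-in functions (names mirror the real ones, bodies are illustrative only):

```python
# Stand-in for transformers' get_image_processor_config: echo what it received.
def get_image_processor_config(model, revision=None, token=None, **kwargs):
    return {"model": model, "revision": revision, "token": token}

# Proposed change: forward extra kwargs (which may carry token=...) instead
# of dropping them.
def get_hf_image_processor_config(model, revision=None, **kwargs):
    return get_image_processor_config(model, revision=revision, **kwargs)

cfg = get_hf_image_processor_config("org/private-model", token="hf_xxx")
```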
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 0easy
|