text (stringlengths 20–57.3k) | labels (class label, 4 classes) |
---|---|
Title: Layoutlmv3 for RE
Body: **Describe**
When I use LayoutLMv3 for the RE task on the XFUND_zh dataset, the result is 'eval_precision': 0.5283, 'eval_recall': 0.4392.
I do not know the reason for the poor result. Maybe there is something wrong with my RE task code? Maybe I need more data for training? Are there any suggestions for how I can improve the result?
Does anyone else have the same problem?
| 1medium
|
Title: [Bug]: GPU memory leak in TextPairRegressor when embed_separately is set to `False`
Body: ### Describe the bug
When training a `TextPairRegressor` model with `embed_separately=False` (the default), via e.g. `ModelTrainer.fine_tune`, the GPU memory slowly creeps up with each batch, eventually causing an OOM even when the model and a single batch fits easily in GPU memory.
The function `store_embeddings` is supposed to clear any embeddings of each DataPoint. For this model, the type of data point is `TextPair`. It actually does seem to handle clearing `text_pair.first` and `.second` when `embed_separately=True`, because it runs embed for each sentence (see `TextPairRegressor._get_embedding_for_data_point`), and that embedding is attached to each sentence so it can be referenced via the sentence.
However, the default setting is `False`; in that case, to embed the pair, it concatenates the text of both sentences (adding a separator), creates a new sentence, embeds that sentence, and then returns that embedding. Since it's never attached to the `DataPoint` object, `clear_embeddings` doesn't find it when you iterate over the data points. The function `identify_dynamic_embeddings` also always comes up empty.
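To illustrate the mechanism (a simplified sketch of my understanding, not the exact flair internals; the separator string here is made up):
```python
from flair.data import Sentence
from flair.embeddings import TransformerDocumentEmbeddings

embeddings = TransformerDocumentEmbeddings("xlm-roberta-base", fine_tune=True)

first = Sentence("query text")
second = Sentence("document text")

# Roughly what TextPairRegressor does when embed_separately=False: build a
# temporary concatenated sentence and embed only that one.
concatenated = Sentence(first.to_tokenized_string() + " [SEP] " + second.to_tokenized_string())
embeddings.embed(concatenated)

# The embedding tensor lives only on `concatenated`, which store_embeddings
# never sees, so clearing the original data points releases nothing:
first.clear_embeddings()
second.clear_embeddings()
print(concatenated.get_embedding().shape)  # tensor is still alive
```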
### To Reproduce
```python
from pathlib import Path

import flair
from flair.datasets import DataPairCorpus
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextPairRegressor
from flair.trainers import ModelTrainer

search_rel_corpus = DataPairCorpus(
    Path('text_pair_dataset'),
    train_file='train.tsv',
    test_file='test.tsv',
    dev_file='dev.tsv',
    label_type='relevance',
    in_memory=False,
)

embeddings = TransformerDocumentEmbeddings(
    model='xlm-roberta-base',
    layers="-1",
    subtoken_pooling='first',
    fine_tune=True,
    use_context=True,
    is_word_embedding=True,
)

text_pair_regressor = TextPairRegressor(embeddings=embeddings, label_type='relevance')

trainer = ModelTrainer(text_pair_regressor, search_rel_corpus)
trainer.fine_tune(
    "relevance_regressor",
    learning_rate=1e-5,
    epoch=0,
    max_epochs=5,
    mini_batch_size=4,
    save_optimizer_state=True,
    save_model_each_k_epochs=1,
    use_amp=True,  # aka Automatic Mixed Precision, e.g. float16
)
```
### Expected behavior
The memory should remain relatively flat with each epoch of training if memory is cleared correctly. In other training, such as for a `TextClassifier`, it stays roughly the same after each mini-batch.
### Logs and Stack traces
```stacktrace
OutOfMemoryError Traceback (most recent call last)
Cell In[15], line 1
----> 1 final_score = trainer.fine_tune(
2 "relevance_regressor",
3 learning_rate=1e-5,
4 epoch=0,
5 max_epochs=5,
6 mini_batch_size=4,
7 save_optimizer_state=True,
8 save_model_each_k_epochs=1,
9 use_amp=True, # aka Automatic Mixed Precision, e.g. float16
10 )
11 final_score
File /pyzr/active_venv/lib/python3.10/site-packages/flair/trainers/trainer.py:253, in ModelTrainer.fine_tune(self, base_path, warmup_fraction, learning_rate, decoder_learning_rate, mini_batch_size, eval_batch_size, mini_batch_chunk_size, max_epochs, optimizer, train_with_dev, train_with_test, reduce_transformer_vocab, main_evaluation_metric, monitor_test, monitor_train_sample, use_final_model_for_eval, gold_label_dictionary_for_eval, exclude_labels, sampler, shuffle, shuffle_first_epoch, embeddings_storage_mode, epoch, save_final_model, save_optimizer_state, save_model_each_k_epochs, create_file_logs, create_loss_file, write_weights, use_amp, plugins, attach_default_scheduler, **kwargs)
250 if attach_default_scheduler:
251 plugins.append(LinearSchedulerPlugin(warmup_fraction=warmup_fraction))
--> 253 return self.train_custom(
254 base_path=base_path,
255 # training parameters
256 learning_rate=learning_rate,
257 decoder_learning_rate=decoder_learning_rate,
258 mini_batch_size=mini_batch_size,
259 eval_batch_size=eval_batch_size,
260 mini_batch_chunk_size=mini_batch_chunk_size,
261 max_epochs=max_epochs,
262 optimizer=optimizer,
263 train_with_dev=train_with_dev,
264 train_with_test=train_with_test,
265 reduce_transformer_vocab=reduce_transformer_vocab,
266 # evaluation and monitoring
267 main_evaluation_metric=main_evaluation_metric,
268 monitor_test=monitor_test,
269 monitor_train_sample=monitor_train_sample,
270 use_final_model_for_eval=use_final_model_for_eval,
271 gold_label_dictionary_for_eval=gold_label_dictionary_for_eval,
272 exclude_labels=exclude_labels,
273 # sampling and shuffling
274 sampler=sampler,
275 shuffle=shuffle,
276 shuffle_first_epoch=shuffle_first_epoch,
277 # evaluation and monitoring
278 embeddings_storage_mode=embeddings_storage_mode,
279 epoch=epoch,
280 # when and what to save
281 save_final_model=save_final_model,
282 save_optimizer_state=save_optimizer_state,
283 save_model_each_k_epochs=save_model_each_k_epochs,
284 # logging parameters
285 create_file_logs=create_file_logs,
286 create_loss_file=create_loss_file,
287 write_weights=write_weights,
288 # amp
289 use_amp=use_amp,
290 # plugins
291 plugins=plugins,
292 **kwargs,
293 )
File /pyzr/active_venv/lib/python3.10/site-packages/flair/trainers/trainer.py:624, in ModelTrainer.train_custom(self, base_path, learning_rate, decoder_learning_rate, mini_batch_size, eval_batch_size, mini_batch_chunk_size, max_epochs, optimizer, train_with_dev, train_with_test, max_grad_norm, reduce_transformer_vocab, main_evaluation_metric, monitor_test, monitor_train_sample, use_final_model_for_eval, gold_label_dictionary_for_eval, exclude_labels, sampler, shuffle, shuffle_first_epoch, embeddings_storage_mode, epoch, save_final_model, save_optimizer_state, save_model_each_k_epochs, create_file_logs, create_loss_file, write_weights, use_amp, plugins, **kwargs)
622 gradient_norm = None
623 scale_before = scaler.get_scale()
--> 624 scaler.step(self.optimizer)
625 scaler.update()
626 scale_after = scaler.get_scale()
File /pyzr/active_venv/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:370, in GradScaler.step(self, optimizer, *args, **kwargs)
366 self.unscale_(optimizer)
368 assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
--> 370 retval = self._maybe_opt_step(optimizer, optimizer_state, *args, **kwargs)
372 optimizer_state["stage"] = OptState.STEPPED
374 return retval
File /pyzr/active_venv/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:290, in GradScaler._maybe_opt_step(self, optimizer, optimizer_state, *args, **kwargs)
288 retval = None
289 if not sum(v.item() for v in optimizer_state["found_inf_per_device"].values()):
--> 290 retval = optimizer.step(*args, **kwargs)
291 return retval
File /pyzr/active_venv/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:69, in LRScheduler.__init__.<locals>.with_counter.<locals>.wrapper(*args, **kwargs)
67 instance._step_count += 1
68 wrapped = func.__get__(instance, cls)
---> 69 return wrapped(*args, **kwargs)
File /pyzr/active_venv/lib/python3.10/site-packages/torch/optim/optimizer.py:280, in Optimizer.profile_hook_step.<locals>.wrapper(*args, **kwargs)
276 else:
277 raise RuntimeError(f"{func} must return None or a tuple of (new_args, new_kwargs),"
278 f"but got {result}.")
--> 280 out = func(*args, **kwargs)
281 self._optimizer_step_code()
283 # call optimizer step post hooks
File /pyzr/active_venv/lib/python3.10/site-packages/torch/optim/optimizer.py:33, in _use_grad_for_differentiable.<locals>._use_grad(self, *args, **kwargs)
31 try:
32 torch.set_grad_enabled(self.defaults['differentiable'])
---> 33 ret = func(self, *args, **kwargs)
34 finally:
35 torch.set_grad_enabled(prev_grad)
File /pyzr/active_venv/lib/python3.10/site-packages/torch/optim/adamw.py:171, in AdamW.step(self, closure)
158 beta1, beta2 = group["betas"]
160 self._init_group(
161 group,
162 params_with_grad,
(...)
168 state_steps,
169 )
--> 171 adamw(
172 params_with_grad,
173 grads,
174 exp_avgs,
175 exp_avg_sqs,
176 max_exp_avg_sqs,
177 state_steps,
178 amsgrad=amsgrad,
179 beta1=beta1,
180 beta2=beta2,
181 lr=group["lr"],
182 weight_decay=group["weight_decay"],
183 eps=group["eps"],
184 maximize=group["maximize"],
185 foreach=group["foreach"],
186 capturable=group["capturable"],
187 differentiable=group["differentiable"],
188 fused=group["fused"],
189 grad_scale=getattr(self, "grad_scale", None),
190 found_inf=getattr(self, "found_inf", None),
191 )
193 return loss
File /pyzr/active_venv/lib/python3.10/site-packages/torch/optim/adamw.py:321, in adamw(params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps, foreach, capturable, differentiable, fused, grad_scale, found_inf, amsgrad, beta1, beta2, lr, weight_decay, eps, maximize)
318 else:
319 func = _single_tensor_adamw
--> 321 func(
322 params,
323 grads,
324 exp_avgs,
325 exp_avg_sqs,
326 max_exp_avg_sqs,
327 state_steps,
328 amsgrad=amsgrad,
329 beta1=beta1,
330 beta2=beta2,
331 lr=lr,
332 weight_decay=weight_decay,
333 eps=eps,
334 maximize=maximize,
335 capturable=capturable,
336 differentiable=differentiable,
337 grad_scale=grad_scale,
338 found_inf=found_inf,
339 )
File /pyzr/active_venv/lib/python3.10/site-packages/torch/optim/adamw.py:566, in _multi_tensor_adamw(params, grads, exp_avgs, exp_avg_sqs, max_exp_avg_sqs, state_steps, grad_scale, found_inf, amsgrad, beta1, beta2, lr, weight_decay, eps, maximize, capturable, differentiable)
564 exp_avg_sq_sqrt = torch._foreach_sqrt(device_exp_avg_sqs)
565 torch._foreach_div_(exp_avg_sq_sqrt, bias_correction2_sqrt)
--> 566 denom = torch._foreach_add(exp_avg_sq_sqrt, eps)
568 torch._foreach_addcdiv_(device_params, device_exp_avgs, denom, step_size)
OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.78 GiB total capacity; 14.06 GiB already allocated; 12.00 MiB free; 14.90 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
### Screenshots
_No response_
### Additional Context
I printed out the GPU usage in an altered `train_custom`:
```
def print_gpu_usage(entry=None):
allocated_memory = torch.cuda.memory_allocated(0)
reserved_memory = torch.cuda.memory_reserved(0)
print(f"{entry}\t{allocated_memory:<15,} / {reserved_memory:<15,}")
```
I saw that when training a `TextClassifier`, the memory usage goes back down to the value at the beginning of a batch after `store_embeddings` is called. In `TextPairRegressor`, the memory does not go down at all after `store_embeddings` is called.
### Environment
#### Versions:
##### Flair
0.13.1
##### Pytorch
2.3.1+cu121
##### Transformers
4.31.0
#### GPU
True | 2hard
|
Title: How do I programmatically access the sample requests from the generated swagger UI
Body: **Ask a question**
For a given restx application, I can see a rich set of details contained in the generated Swagger UI, for example for each endpoint, I can see sample requests populated with default values from the restx `fields` I created to serve as the components when defining the endpoints. These show up as example `curl` commands that I can copy/paste into a shell (as well as being executed from the 'Try it out' button).
However, I want to access this data programmatically from the app client itself. Suppose I load and run the app in a standalone Python program and have a handle to the Flask `app` object. I can see attributes such as `api.application.blueprints['restx_doc']` to get a handle to the `Apidoc` object.
But I cannot find out where this object stores all the information I need to programmatically reconstruct valid requests to the service's endpoint.
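What I am hoping for is something along these lines (a sketch; I don't know whether `__schema__` is the intended public entry point, and the model here is just an illustration):
```python
from flask import Flask
from flask_restx import Api, Resource, fields

app = Flask(__name__)
api = Api(app)

todo = api.model("Todo", {"task": fields.String(default="buy milk")})

@api.route("/todos")
class TodoList(Resource):
    @api.expect(todo)
    def post(self):
        return {}, 201

# Programmatic access to the generated Swagger document:
spec = api.__schema__                       # same dict served at /swagger.json
example_body = spec["definitions"]["Todo"]  # model schema incl. default values
print(example_body)

# Alternatively, fetch the raw spec the Swagger UI itself uses:
with app.test_client() as client:
    print(client.get("/swagger.json").json)
```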
| 1medium
|
Title: Request body is saved in a non human-readable format when it contains special characters
Body: Hello and thank you for creating this tool, it looks very promising! The overall experience has been good so far, but I did notice an issue that's a bit inconvenient.
I've created a `POST` request which contains letters with diacritics in the body, such as this one:
```json
{
"Hello": "There",
"Hi": "ฤau"
}
```
If I save the request into a yaml file, the body will be saved in a hard to read format:
```yaml
name: Test
method: POST
url: https://example.org/test
body:
content: "{\n \"Hello\": \"There\",\n \"Hi\": \"\u010Cau\"\n}"
```
If I replace the `Č` with a regular `C`, the resulting yaml file will have the format that I expect:
```yaml
name: Test
method: POST
url: https://example.org/test
body:
content: |-
{
"Hello": "There",
"Hi": "Cau"
}
```
Is it possible to fix this? The current behavior complicates manual editing and version control diffs, so I think it might be worth looking into.
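For what it's worth, this looks like the usual YAML-emitter behaviour of escaping non-ASCII characters when `allow_unicode` is off. A minimal PyYAML sketch (I haven't checked which YAML library posting uses internally, so this is only an illustration):
```python
import yaml

body = '{\n    "Hello": "There",\n    "Hi": "Čau"\n}'

# Without allow_unicode, any non-ASCII character forces the emitter into a
# double-quoted scalar with \uXXXX and \n escapes (the hard-to-read form):
print(yaml.safe_dump({"content": body}))

# With allow_unicode=True the character is emitted literally, so the dumper
# is free to use a more readable multi-line style:
print(yaml.safe_dump({"content": body}, allow_unicode=True))
```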
I'm using `posting` `1.7.0`
Thanks! | 1medium
|
Title: Voila not displaying Canvas from IpyCanvas
Body: <!--
Welcome! Before creating a new issue please search for relevant issues and recreate the issue in a fresh environment.
-->
## Description
When executing the Jupyter Notebook, the canvas appears and works as intended, but when executing with Voilà, it's a blank canvas.
<!--Describe the bug clearly and concisely. Include screenshots/gifs if possible-->


(Screenshots: the canvas renders as expected in the Jupyter Notebook, but is empty when rendered with Voilà.)
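A minimal example along the lines of what I'm running (a sketch, not my exact notebook):
```python
from ipycanvas import Canvas

canvas = Canvas(width=200, height=200)
canvas.fill_style = "green"
canvas.fill_rect(25, 25, 100, 100)
canvas  # renders in the notebook, but shows up blank under Voilà
```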
| 1medium
|
Title: Error when loading multiple models - Tensor name not found
Body: In my code, I'm loading two DNN models. Model A is a normal DNN with fully-connected layers, and Model B is a Convolutional Neural Network similar to the one used in the MNIST example.
Individually, they both work just fine - they train properly, they save properly, they load properly, and predict properly. However, when loading both neural networks, tflearn crashes with an error that seems to indicate `"Tensor name 'example_name' not found in checkpoint files..."`
This error will be thrown for whatever model is loaded second (i.e. Model A will load and run correctly but Model B will not, and if the order is switched, then vice-versa). This happens even when the models are saved in and loaded from completely different directories. I'm guessing it's some sort of internal caching problem with the checkpoint files. Any solutions?
Here's some more of the stack trace, if it helps
```
File "/usr/local/lib/python2.7/site-packages/tflearn/models/dnn.py", line 227, in load
self.trainer.restore(model_file)
File "/usr/local/lib/python2.7/site-packages/tflearn/helpers/trainer.py", line 379, in restore
self.restorer.restore(self.session, model_file)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1105, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 636, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 708, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 728, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.NotFoundError: Tensor name "Accuracy/Mean/moving_avg_1" not found in checkpoint files classification_classifier.tfl
[[Node: save_5/restore_slice_1 = RestoreSlice[dt=DT_FLOAT, preferred_shard=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save_5/Const_0, save_5/restore_slice_1/tensor_name, save_5/restore_slice_1/shape_and_slice)]]
Caused by op u'save_5/restore_slice_1', defined at:
```
| 2hard
|
Title: Programmatically create tasks based on the product of the task executed in the previous pipeline step
Body: I would like to understand how to programmatically create tasks based on the product of the task executed in the previous pipeline step.
For example, `get_data` creates a CSV file, and I want to create a task for each row of the CSV: `process_row_1`, `process_row_2`, .... Accordingly, I have code using the Python API that reads the CSV file -- how do I indicate that this CSV file is the product of another task?
So how do I use the upstream idiom (`upstream['get_data']`)?
I have formulated the question briefly, assuming that my case is quite typical. If this assumption is incorrect, I am ready to supplement the issue with code that illustrates my request more clearly.
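A rough sketch of what I am attempting with the Python API (function bodies and names are illustrative; the part I'm unsure about is how to create the per-row tasks, since the rows are only known after `get_data` runs):
```python
import pandas as pd
from ploomber import DAG
from ploomber.products import File
from ploomber.tasks import PythonCallable

def get_data(product):
    # Produces the CSV whose rows should drive the downstream tasks.
    pd.DataFrame({"value": [1, 2, 3]}).to_csv(str(product), index=False)

def process_row(upstream, product, row_index):
    # Reads the CSV produced by get_data via the upstream idiom.
    df = pd.read_csv(str(upstream["get_data"]))
    df.iloc[[row_index]].to_csv(str(product), index=False)

dag = DAG()
t_get = PythonCallable(get_data, File("data.csv"), dag, name="get_data")

# This is the part in question: the number of rows is only known after
# get_data runs, so how should these tasks be created programmatically?
for i in range(3):
    t_row = PythonCallable(
        process_row,
        File(f"row_{i}.csv"),
        dag,
        name=f"process_row_{i}",
        params={"row_index": i},
    )
    t_get >> t_row

dag.build()
```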
| 1medium
|
Title: Bug: `Unsupported type: <class 'msgspec._core.StructMeta'>`
Body: ### Description
Visiting /schema when a route contains a request struct that utilizes a `msgspec` Struct via default factory is raising the error `Unsupported type: <class 'msgspec._core.StructMeta'>`.
Essentially, if I have a struct like this:
```python
import msgspec

class Stuff(msgspec.Struct):
    foo: list = msgspec.field(default_factory=list)
```
And I use that struct as my request and then I visit `/schema`, I will get the error `Unsupported type: <class 'msgspec._core.StructMeta'>`.
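For a minimal reproduction, something along these lines (a sketch; the handler and route names are just illustrative) should trigger it when visiting `/schema`:
```python
import msgspec
from litestar import Litestar, post

class Stuff(msgspec.Struct):
    foo: list = msgspec.field(default_factory=list)

@post("/stuff")
async def create_stuff(data: Stuff) -> Stuff:
    return data

app = Litestar(route_handlers=[create_stuff])
# Run the app (e.g. `litestar run`) and open /schema to see the error.
```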
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.13.0 final
### Platform
- [x] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | 1medium
|
Title: Bokeh: BokehJS was loaded multiple times but one version failed to initialize.
Body: Hi team, thanks for your hard work. If possible, can we put a high priority on this fix? It's quite damaging to user experience.
#### ALL software version info
<details>
<summary>Software Version Info</summary>
```plaintext
acryl-datahub==0.10.5.5
aiohappyeyeballs==2.4.0
aiohttp==3.10.5
aiosignal==1.3.1
alembic==1.13.2
ansi2html==1.9.2
anyio==4.4.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==2.4.1
async-generator==1.10
async-lru==2.0.4
attrs==24.2.0
autograd==1.7.0
autograd-gamma==0.5.0
avro==1.10.2
avro-gen3==0.7.10
awscli==1.33.27
babel==2.16.0
backports.tarfile==1.2.0
beautifulsoup4==4.12.3
black==24.8.0
bleach==6.1.0
blinker==1.8.2
bokeh==3.4.2
bokehtools==0.46.2
boto3==1.34.76
botocore==1.34.145
bouncer-client==0.4.1
cached-property==1.5.2
certifi==2024.7.4
certipy==0.1.3
cffi==1.17.0
charset-normalizer==3.3.2
click==8.1.7
click-default-group==1.2.4
click-spinner==0.1.10
cloudpickle==3.0.0
colorama==0.4.6
colorcet==3.0.1
comm==0.2.2
contourpy==1.3.0
cryptography==43.0.0
cycler==0.12.1
dash==2.17.1
dash-core-components==2.0.0
dash-html-components==2.0.0
dash-table==5.0.0
dask==2024.8.1
datashader==0.16.3
datatank-client==2.1.10.post12049
dataworks-common==2.1.10.post12049
debugpy==1.8.5
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
directives-client==0.4.4
docker==7.1.0
docutils==0.16
entrypoints==0.4
executing==2.0.1
expandvars==0.12.0
fastjsonschema==2.20.0
Flask==3.0.3
fonttools==4.53.1
formulaic==1.0.2
fqdn==1.5.1
frozenlist==1.4.1
fsspec==2024.6.1
future==1.0.0
gitdb==4.0.11
GitPython==3.1.43
greenlet==3.0.3
h11==0.14.0
holoviews==1.19.0
httpcore==1.0.5
httpx==0.27.2
humanfriendly==10.0
hvplot==0.10.0
idna==3.8
ijson==3.3.0
importlib-metadata==4.13.0
interface-meta==1.3.0
ipykernel==6.29.5
ipython==8.18.0
ipython-genutils==0.2.0
ipywidgets==8.1.5
isoduration==20.11.0
isort==5.13.2
itsdangerous==2.2.0
jaraco.classes==3.4.0
jaraco.context==6.0.1
jaraco.functools==4.0.2
jedi==0.19.1
jeepney==0.8.0
Jinja2==3.1.4
jira==3.2.0
jmespath==1.0.1
json5==0.9.25
jsonpointer==3.0.0
jsonref==1.1.0
jsonschema==4.17.3
jsonschema-specifications==2023.12.1
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-dash==0.4.2
jupyter-events==0.10.0
jupyter-lsp==2.2.5
jupyter-resource-usage==1.1.0
jupyter-server-mathjax==0.2.6
jupyter-telemetry==0.1.0
jupyter_bokeh==4.0.5
jupyter_client==8.6.2
jupyter_core==5.7.2
jupyter_server==2.14.2
jupyter_server_proxy==4.3.0
jupyter_server_terminals==0.5.3
jupyterhub==4.1.4
jupyterlab==4.2.5
jupyterlab-vim==4.1.3
jupyterlab_code_formatter==3.0.2
jupyterlab_git==0.50.1
jupyterlab_pygments==0.3.0
jupyterlab_server==2.27.3
jupyterlab_templates==0.5.2
jupyterlab_widgets==3.0.13
keyring==25.3.0
kiwisolver==1.4.5
lckr_jupyterlab_variableinspector==3.2.1
lifelines==0.29.0
linkify-it-py==2.0.3
llvmlite==0.43.0
locket==1.0.0
Mako==1.3.5
Markdown==3.3.7
markdown-it-py==3.0.0
MarkupSafe==2.1.5
matplotlib==3.9.2
matplotlib-inline==0.1.7
mdit-py-plugins==0.4.1
mdurl==0.1.2
mistune==3.0.2
mixpanel==4.10.1
more-itertools==10.4.0
multidict==6.0.5
multipledispatch==1.0.0
mypy-extensions==1.0.0
nbclassic==1.1.0
nbclient==0.10.0
nbconvert==7.16.4
nbdime==4.0.1
nbformat==5.10.4
nbgitpuller==1.2.1
nest-asyncio==1.6.0
notebook==7.2.2
notebook_shim==0.2.4
numba==0.60.0
numpy==1.26.4
oauthlib==3.2.2
overrides==7.7.0
packaging==24.1
pamela==1.2.0
pandas==2.1.4
pandocfilters==1.5.1
panel==1.4.4
param==2.1.1
parso==0.8.4
partd==1.4.2
pathspec==0.12.1
pexpect==4.9.0
pillow==10.4.0
platformdirs==4.2.2
plotly==5.23.0
progressbar2==4.5.0
prometheus_client==0.20.0
prompt-toolkit==3.0.38
psutil==5.9.8
psycopg2-binary==2.9.9
ptyprocess==0.7.0
pure_eval==0.2.3
pyarrow==15.0.2
pyasn1==0.6.0
pycparser==2.22
pyct==0.5.0
pydantic==1.10.18
Pygments==2.18.0
PyHive==0.7.0
PyJWT==2.9.0
pymssql==2.3.0
PyMySQL==1.1.1
pyodbc==5.1.0
pyOpenSSL==24.2.1
pyparsing==3.1.4
pyrsistent==0.20.0
pyspork==2.24.0
python-dateutil==2.9.0.post0
python-json-logger==2.0.7
python-utils==3.8.2
pytz==2024.1
pyviz_comms==3.0.3
PyYAML==6.0.1
pyzmq==26.2.0
qtconsole==5.5.2
QtPy==2.4.1
ratelimiter==1.2.0.post0
redis==3.5.3
referencing==0.35.1
requests==2.32.3
requests-file==2.1.0
requests-oauthlib==2.0.0
requests-toolbelt==1.0.0
retrying==1.3.4
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rpds-py==0.20.0
rsa==4.7.2
ruamel.yaml==0.17.40
ruamel.yaml.clib==0.2.8
ruff==0.6.2
s3transfer==0.10.2
scipy==1.13.0
SecretStorage==3.3.3
Send2Trash==1.8.3
sentry-sdk==2.13.0
simpervisor==1.0.0
six==1.16.0
smmap==5.0.1
sniffio==1.3.1
soupsieve==2.6
SQLAlchemy==1.4.52
sqlparse==0.4.4
stack-data==0.6.3
structlog==22.1.0
tabulate==0.9.0
tenacity==9.0.0
termcolor==2.4.0
terminado==0.18.1
tesladex-client==0.9.0
tinycss2==1.3.0
toml==0.10.2
toolz==0.12.1
tornado==6.4.1
tqdm==4.66.4
traitlets==5.14.3
types-python-dateutil==2.9.0.20240821
typing-inspect==0.9.0
typing_extensions==4.5.0
tzdata==2024.1
tzlocal==5.2
uc-micro-py==1.0.3
uri-template==1.3.0
urllib3==1.26.19
wcwidth==0.2.13
webcolors==24.8.0
webencodings==0.5.1
websocket-client==1.8.0
Werkzeug==3.0.4
widgetsnbextension==4.0.13
wrapt==1.16.0
xarray==2024.7.0
xyzservices==2024.6.0
yapf==0.32.0
yarl==1.9.4
zipp==3.20.1
```
</details>
#### Description of expected behavior and the observed behavior
I should be able to use panel in 2 notebooks simultaneously, but if I save my changes and reload the page, the error will show.
#### Complete, minimal, self-contained example code that reproduces the issue
Steps to reproduce:
1. create 2 notebooks with the following content
```python
# notebook 1
import panel as pn
pn.extension()
pn.Column('hi')
```
```python
# notebook 2 (open in another jupyterlab tab)
import panel as pn
pn.extension()
pn.Column('hi')
```
2. Run both notebooks
3. Save both notebooks
4. Reload your page
5. Try to run either of the notebooks and you'll see the error.
#### Stack traceback and/or browser JavaScript console output
(Ignore the `set_log_level` error. I think it's unrelated.)

| 1medium
|
Title: --base-image not recognise as valid argument
Body: Related with https://github.com/jupyterhub/repo2docker/issues/487
https://github.com/jupyterhub/repo2docker/blob/247e9535b167112cabf69eed59a6947e4af1ee34/repo2docker/app.py#L450 should make `--base-image` a valid argument for `repo2docker` but I'm getting
```
repo2docker: error: unrecognized arguments: --base-image
```
with
```
$ repo2docker --version
2023.06.0
``` | 1medium
|
Title: [Bug] pip install TTS failure: pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 200000
Body: ### Describe the bug
The pip installation fails.
### To Reproduce
**1. Run the following command:** `pip install TTS`
```
C:>C:\Python38\scripts\pip install TTS
```
**2. Wait:**
```
Collecting TTS
Downloading TTS-0.14.3.tar.gz (1.5 MB)
---------------------------------------- 1.5/1.5 MB 1.7 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting cython==0.29.28 (from TTS)
Using cached Cython-0.29.28-py2.py3-none-any.whl (983 kB)
Requirement already satisfied: scipy>=1.4.0 in C:\python38\lib\site-packages (from TTS) (1.7.1)
Collecting torch>=1.7 (from TTS)
Downloading torch-2.0.1-cp38-cp38-win_amd64.whl (172.4 MB)
---------------------------------------- 172.4/172.4 MB ? eta 0:00:00
Collecting torchaudio (from TTS)
Downloading torchaudio-2.0.2-cp38-cp38-win_amd64.whl (2.1 MB)
---------------------------------------- 2.1/2.1 MB 3.6 MB/s eta 0:00:00
Collecting soundfile (from TTS)
Downloading soundfile-0.12.1-py2.py3-none-win_amd64.whl (1.0 MB)
---------------------------------------- 1.0/1.0 MB 5.8 MB/s eta 0:00:00
Collecting librosa==0.10.0.* (from TTS)
Downloading librosa-0.10.0.post2-py3-none-any.whl (253 kB)
---------------------------------------- 253.0/253.0 kB 15.2 MB/s eta 0:00:00
Collecting inflect==5.6.0 (from TTS)
Downloading inflect-5.6.0-py3-none-any.whl (33 kB)
Requirement already satisfied: tqdm in C:\python38\lib\site-packages (from TTS) (4.60.0)
Collecting anyascii (from TTS)
Downloading anyascii-0.3.2-py3-none-any.whl (289 kB)
---------------------------------------- 289.9/289.9 kB 9.0 MB/s eta 0:00:00
Requirement already satisfied: pyyaml in C:\python38\lib\site-packages (from TTS) (5.4.1)
Requirement already satisfied: fsspec>=2021.04.0 in C:\python38\lib\site-packages (from TTS) (2022.3.0)
Requirement already satisfied: aiohttp in C:\python38\lib\site-packages (from TTS) (3.7.3)
Requirement already satisfied: packaging in C:\python38\lib\site-packages (from TTS) (23.0)
Collecting flask (from TTS)
Downloading flask-2.3.3-py3-none-any.whl (96 kB)
---------------------------------------- 96.1/96.1 kB 5.4 MB/s eta 0:00:00
Collecting pysbd (from TTS)
Downloading pysbd-0.3.4-py3-none-any.whl (71 kB)
---------------------------------------- 71.1/71.1 kB 2.0 MB/s eta 0:00:00
Collecting umap-learn==0.5.1 (from TTS)
Downloading umap-learn-0.5.1.tar.gz (80 kB)
---------------------------------------- 80.9/80.9 kB 4.7 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Requirement already satisfied: pandas in C:\python38\lib\site-packages (from TTS) (1.5.1)
Requirement already satisfied: matplotlib in C:\python38\lib\site-packages (from TTS) (3.6.3)
Collecting trainer==0.0.20 (from TTS)
Downloading trainer-0.0.20-py3-none-any.whl (45 kB)
---------------------------------------- 45.2/45.2 kB 1.1 MB/s eta 0:00:00
Collecting coqpit>=0.0.16 (from TTS)
Downloading coqpit-0.0.17-py3-none-any.whl (13 kB)
Collecting jieba (from TTS)
Downloading jieba-0.42.1.tar.gz (19.2 MB)
---------------------------------------- 19.2/19.2 MB 2.0 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting pypinyin (from TTS)
Downloading pypinyin-0.49.0-py2.py3-none-any.whl (1.4 MB)
---------------------------------------- 1.4/1.4 MB 3.2 MB/s eta 0:00:00
Collecting mecab-python3==1.0.5 (from TTS)
Downloading mecab_python3-1.0.5-cp38-cp38-win_amd64.whl (500 kB)
---------------------------------------- 500.8/500.8 kB 6.3 MB/s eta 0:00:00
Collecting unidic-lite==1.0.8 (from TTS)
Downloading unidic-lite-1.0.8.tar.gz (47.4 MB)
---------------------------------------- 47.4/47.4 MB 1.8 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting gruut[de,es,fr]==2.2.3 (from TTS)
Downloading gruut-2.2.3.tar.gz (73 kB)
---------------------------------------- 73.5/73.5 kB 213.1 kB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting jamo (from TTS)
Downloading jamo-0.4.1-py3-none-any.whl (9.5 kB)
Collecting nltk (from TTS)
Downloading nltk-3.8.1-py3-none-any.whl (1.5 MB)
---------------------------------------- 1.5/1.5 MB 3.4 MB/s eta 0:00:00
Collecting g2pkk>=0.1.1 (from TTS)
Downloading g2pkk-0.1.2-py3-none-any.whl (25 kB)
Collecting bangla==0.0.2 (from TTS)
Downloading bangla-0.0.2-py2.py3-none-any.whl (6.2 kB)
Collecting bnnumerizer (from TTS)
Downloading bnnumerizer-0.0.2.tar.gz (4.7 kB)
Preparing metadata (setup.py) ... done
Collecting bnunicodenormalizer==0.1.1 (from TTS)
Downloading bnunicodenormalizer-0.1.1.tar.gz (38 kB)
Preparing metadata (setup.py) ... done
Collecting k-diffusion (from TTS)
Downloading k_diffusion-0.1.0-py3-none-any.whl (33 kB)
Collecting einops (from TTS)
Downloading einops-0.6.1-py3-none-any.whl (42 kB)
---------------------------------------- 42.2/42.2 kB 1.0 MB/s eta 0:00:00
Collecting transformers (from TTS)
Downloading transformers-4.33.3-py3-none-any.whl (7.6 MB)
---------------------------------------- 7.6/7.6 MB 3.1 MB/s eta 0:00:00
Collecting numpy==1.21.6 (from TTS)
Using cached numpy-1.21.6-cp38-cp38-win_amd64.whl (14.0 MB)
Collecting numba==0.55.1 (from TTS)
Downloading numba-0.55.1-cp38-cp38-win_amd64.whl (2.4 MB)
---------------------------------------- 2.4/2.4 MB 4.1 MB/s eta 0:00:00
Requirement already satisfied: Babel<3.0.0,>=2.8.0 in C:\python38\lib\site-packages (from gruut[de,es,fr]==2.2.3->TT
Collecting dateparser~=1.1.0 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading dateparser-1.1.8-py2.py3-none-any.whl (293 kB)
---------------------------------------- 293.8/293.8 kB 4.6 MB/s eta 0:00:00
Collecting gruut-ipa<1.0,>=0.12.0 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading gruut-ipa-0.13.0.tar.gz (101 kB)
---------------------------------------- 101.6/101.6 kB ? eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting gruut_lang_en~=2.0.0 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading gruut_lang_en-2.0.0.tar.gz (15.2 MB)
---------------------------------------- 15.2/15.2 MB 3.5 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting jsonlines~=1.2.0 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading jsonlines-1.2.0-py2.py3-none-any.whl (7.6 kB)
Requirement already satisfied: networkx<3.0.0,>=2.5.0 in C:\python38\lib\site-packages (from gruut[de,es,fr]==2.2.3-
Collecting num2words<1.0.0,>=0.5.10 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading num2words-0.5.12-py3-none-any.whl (125 kB)
---------------------------------------- 125.2/125.2 kB 7.2 MB/s eta 0:00:00
Collecting python-crfsuite~=0.9.7 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading python_crfsuite-0.9.9-cp38-cp38-win_amd64.whl (138 kB)
---------------------------------------- 138.9/138.9 kB 4.2 MB/s eta 0:00:00
Requirement already satisfied: importlib_resources in C:\python38\lib\site-packages (from gruut[de,es,fr]==2.2.3->TT
Collecting gruut_lang_es~=2.0.0 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading gruut_lang_es-2.0.0.tar.gz (31.4 MB)
---------------------------------------- 31.4/31.4 MB 2.8 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting gruut_lang_fr~=2.0.0 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading gruut_lang_fr-2.0.2.tar.gz (10.9 MB)
---------------------------------------- 10.9/10.9 MB 3.8 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting gruut_lang_de~=2.0.0 (from gruut[de,es,fr]==2.2.3->TTS)
Downloading gruut_lang_de-2.0.0.tar.gz (18.1 MB)
---------------------------------------- 18.1/18.1 MB 3.9 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting audioread>=2.1.9 (from librosa==0.10.0.*->TTS)
Downloading audioread-3.0.1-py3-none-any.whl (23 kB)
Requirement already satisfied: scikit-learn>=0.20.0 in C:\python38\lib\site-packages (from librosa==0.10.0.*->TTS) (
Requirement already satisfied: joblib>=0.14 in C:\python38\lib\site-packages (from librosa==0.10.0.*->TTS) (1.0.1)
Requirement already satisfied: decorator>=4.3.0 in C:\python38\lib\site-packages (from librosa==0.10.0.*->TTS) (4.4.
Collecting pooch<1.7,>=1.0 (from librosa==0.10.0.*->TTS)
Downloading pooch-1.6.0-py3-none-any.whl (56 kB)
---------------------------------------- 56.3/56.3 kB 51.7 kB/s eta 0:00:00
Collecting soxr>=0.3.2 (from librosa==0.10.0.*->TTS)
Downloading soxr-0.3.6-cp38-cp38-win_amd64.whl (185 kB)
---------------------------------------- 185.1/185.1 kB 431.8 kB/s eta 0:00:00
Requirement already satisfied: typing-extensions>=4.1.1 in C:\python38\lib\site-packages (from librosa==0.10.0.*->TT
Collecting lazy-loader>=0.1 (from librosa==0.10.0.*->TTS)
Downloading lazy_loader-0.3-py3-none-any.whl (9.1 kB)
Collecting msgpack>=1.0 (from librosa==0.10.0.*->TTS)
Downloading msgpack-1.0.7-cp38-cp38-win_amd64.whl (222 kB)
---------------------------------------- 222.8/222.8 kB 1.4 MB/s eta 0:00:00
Collecting llvmlite<0.39,>=0.38.0rc1 (from numba==0.55.1->TTS)
Downloading llvmlite-0.38.1-cp38-cp38-win_amd64.whl (23.2 MB)
---------------------------------------- 23.2/23.2 MB 917.7 kB/s eta 0:00:00
Requirement already satisfied: setuptools in C:\python38\lib\site-packages (from numba==0.55.1->TTS) (67.6.1)
Requirement already satisfied: psutil in C:\python38\lib\site-packages (from trainer==0.0.20->TTS) (5.8.0)
Collecting tensorboardX (from trainer==0.0.20->TTS)
Downloading tensorboardX-2.6.2.2-py2.py3-none-any.whl (101 kB)
---------------------------------------- 101.7/101.7 kB 1.9 MB/s eta 0:00:00
Requirement already satisfied: protobuf<3.20,>=3.9.2 in C:\python38\lib\site-packages (from trainer==0.0.20->TTS) (3
Collecting pynndescent>=0.5 (from umap-learn==0.5.1->TTS)
Downloading pynndescent-0.5.10.tar.gz (1.1 MB)
---------------------------------------- 1.1/1.1 MB 3.3 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Requirement already satisfied: cffi>=1.0 in C:\python38\lib\site-packages (from soundfile->TTS) (1.14.5)
Requirement already satisfied: filelock in C:\python38\lib\site-packages (from torch>=1.7->TTS) (3.0.12)
Requirement already satisfied: sympy in C:\python38\lib\site-packages (from torch>=1.7->TTS) (1.11.1)
Requirement already satisfied: jinja2 in C:\python38\lib\site-packages (from torch>=1.7->TTS) (3.0.1)
Requirement already satisfied: attrs>=17.3.0 in C:\python38\lib\site-packages (from aiohttp->TTS) (21.2.0)
Requirement already satisfied: chardet<4.0,>=2.0 in C:\python38\lib\site-packages (from aiohttp->TTS) (3.0.4)
Requirement already satisfied: multidict<7.0,>=4.5 in C:\python38\lib\site-packages (from aiohttp->TTS) (5.1.0)
Requirement already satisfied: async-timeout<4.0,>=3.0 in C:\python38\lib\site-packages (from aiohttp->TTS) (3.0.1)
Requirement already satisfied: yarl<2.0,>=1.0 in C:\python38\lib\site-packages (from aiohttp->TTS) (1.6.3)
Collecting Werkzeug>=2.3.7 (from flask->TTS)
Downloading werkzeug-2.3.7-py3-none-any.whl (242 kB)
---------------------------------------- 242.2/242.2 kB 1.5 MB/s eta 0:00:00
Collecting jinja2 (from torch>=1.7->TTS)
Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
---------------------------------------- 133.1/133.1 kB 7.7 MB/s eta 0:00:00
Requirement already satisfied: itsdangerous>=2.1.2 in C:\python38\lib\site-packages (from flask->TTS) (2.1.2)
Requirement already satisfied: click>=8.1.3 in C:\python38\lib\site-packages (from flask->TTS) (8.1.7)
Collecting blinker>=1.6.2 (from flask->TTS)
Downloading blinker-1.6.2-py3-none-any.whl (13 kB)
Requirement already satisfied: importlib-metadata>=3.6.0 in C:\python38\lib\site-packages (from flask->TTS) (6.0.0)
Collecting accelerate (from k-diffusion->TTS)
Downloading accelerate-0.23.0-py3-none-any.whl (258 kB)
---------------------------------------- 258.1/258.1 kB 4.0 MB/s eta 0:00:00
Collecting clean-fid (from k-diffusion->TTS)
Downloading clean_fid-0.1.35-py3-none-any.whl (26 kB)
Collecting clip-anytorch (from k-diffusion->TTS)
Downloading clip_anytorch-2.5.2-py3-none-any.whl (1.4 MB)
---------------------------------------- 1.4/1.4 MB 3.1 MB/s eta 0:00:00
Collecting dctorch (from k-diffusion->TTS)
Downloading dctorch-0.1.2-py3-none-any.whl (2.3 kB)
Collecting jsonmerge (from k-diffusion->TTS)
Downloading jsonmerge-1.9.2-py3-none-any.whl (19 kB)
Collecting kornia (from k-diffusion->TTS)
Downloading kornia-0.7.0-py2.py3-none-any.whl (705 kB)
---------------------------------------- 705.7/705.7 kB 3.0 MB/s eta 0:00:00
Requirement already satisfied: Pillow in C:\python38\lib\site-packages (from k-diffusion->TTS) (9.5.0)
Collecting rotary-embedding-torch (from k-diffusion->TTS)
Downloading rotary_embedding_torch-0.3.0-py3-none-any.whl (4.9 kB)
Collecting safetensors (from k-diffusion->TTS)
Downloading safetensors-0.3.3-cp38-cp38-win_amd64.whl (266 kB)
---------------------------------------- 266.3/266.3 kB 1.6 MB/s eta 0:00:00
Collecting scikit-image (from k-diffusion->TTS)
Downloading scikit_image-0.21.0-cp38-cp38-win_amd64.whl (22.7 MB)
---------------------------------------- 22.7/22.7 MB 944.0 kB/s eta 0:00:00
Collecting torchdiffeq (from k-diffusion->TTS)
Downloading torchdiffeq-0.2.3-py3-none-any.whl (31 kB)
Collecting torchsde (from k-diffusion->TTS)
Downloading torchsde-0.2.6-py3-none-any.whl (61 kB)
---------------------------------------- 61.2/61.2 kB ? eta 0:00:00
Collecting torchvision (from k-diffusion->TTS)
Downloading torchvision-0.15.2-cp38-cp38-win_amd64.whl (1.2 MB)
---------------------------------------- 1.2/1.2 MB 6.3 MB/s eta 0:00:00
Collecting wandb (from k-diffusion->TTS)
Downloading wandb-0.15.11-py3-none-any.whl (2.1 MB)
---------------------------------------- 2.1/2.1 MB 2.8 MB/s eta 0:00:00
Requirement already satisfied: contourpy>=1.0.1 in C:\python38\lib\site-packages (from matplotlib->TTS) (1.0.7)
Requirement already satisfied: cycler>=0.10 in C:\python38\lib\site-packages (from matplotlib->TTS) (0.10.0)
Requirement already satisfied: fonttools>=4.22.0 in C:\python38\lib\site-packages (from matplotlib->TTS) (4.38.0)
Requirement already satisfied: kiwisolver>=1.0.1 in C:\python38\lib\site-packages (from matplotlib->TTS) (1.3.1)
Requirement already satisfied: pyparsing>=2.2.1 in C:\python38\lib\site-packages (from matplotlib->TTS) (2.4.7)
Requirement already satisfied: python-dateutil>=2.7 in C:\python38\lib\site-packages (from matplotlib->TTS) (2.8.2)
Collecting regex>=2021.8.3 (from nltk->TTS)
Downloading regex-2023.8.8-cp38-cp38-win_amd64.whl (268 kB)
---------------------------------------- 268.3/268.3 kB 4.2 MB/s eta 0:00:00
Requirement already satisfied: pytz>=2020.1 in C:\python38\lib\site-packages (from pandas->TTS) (2021.1)
Collecting huggingface-hub<1.0,>=0.15.1 (from transformers->TTS)
Downloading huggingface_hub-0.17.3-py3-none-any.whl (295 kB)
---------------------------------------- 295.0/295.0 kB 1.1 MB/s eta 0:00:00
Requirement already satisfied: requests in C:\python38\lib\site-packages (from transformers->TTS) (2.31.0)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers->TTS)
Downloading tokenizers-0.13.3-cp38-cp38-win_amd64.whl (3.5 MB)
---------------------------------------- 3.5/3.5 MB 4.7 MB/s eta 0:00:00
Requirement already satisfied: pycparser in C:\python38\lib\site-packages (from cffi>=1.0->soundfile->TTS) (2.20)
Requirement already satisfied: colorama in C:\python38\lib\site-packages (from click>=8.1.3->flask->TTS) (0.4.6)
Requirement already satisfied: six in C:\python38\lib\site-packages (from cycler>=0.10->matplotlib->TTS) (1.15.0)
Requirement already satisfied: tzlocal in C:\python38\lib\site-packages (from dateparser~=1.1.0->gruut[de,es,fr]==2.
Requirement already satisfied: zipp>=0.5 in C:\python38\lib\site-packages (from importlib-metadata>=3.6.0->flask->TT
Requirement already satisfied: MarkupSafe>=2.0 in C:\python38\lib\site-packages (from jinja2->torch>=1.7->TTS) (2.0.
Collecting docopt>=0.6.2 (from num2words<1.0.0,>=0.5.10->gruut[de,es,fr]==2.2.3->TTS)
Downloading docopt-0.6.2.tar.gz (25 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: appdirs>=1.3.0 in C:\python38\lib\site-packages (from pooch<1.7,>=1.0->librosa==0.10.
Requirement already satisfied: charset-normalizer<4,>=2 in C:\python38\lib\site-packages (from requests->transformer
Requirement already satisfied: idna<4,>=2.5 in C:\python38\lib\site-packages (from requests->transformers->TTS) (2.1
Requirement already satisfied: urllib3<3,>=1.21.1 in C:\python38\lib\site-packages (from requests->transformers->TTS
Requirement already satisfied: certifi>=2017.4.17 in C:\python38\lib\site-packages (from requests->transformers->TTS
Requirement already satisfied: threadpoolctl>=2.0.0 in C:\python38\lib\site-packages (from scikit-learn>=0.20.0->lib
Collecting MarkupSafe>=2.0 (from jinja2->torch>=1.7->TTS)
Downloading MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl (17 kB)
Collecting ftfy (from clip-anytorch->k-diffusion->TTS)
Downloading ftfy-6.1.1-py3-none-any.whl (53 kB)
---------------------------------------- 53.1/53.1 kB 2.7 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of dctorch to determine which version is compatible with other requirements. This could take a while.
Collecting dctorch (from k-diffusion->TTS)
Downloading dctorch-0.1.1-py3-none-any.whl (2.3 kB)
Downloading dctorch-0.1.0-py3-none-any.whl (2.3 kB)
Collecting clean-fid (from k-diffusion->TTS)
Downloading clean_fid-0.1.34-py3-none-any.whl (26 kB)
Collecting requests (from transformers->TTS)
Using cached requests-2.25.1-py2.py3-none-any.whl (61 kB)
Collecting clean-fid (from k-diffusion->TTS)
Downloading clean_fid-0.1.33-py3-none-any.whl (25 kB)
INFO: pip is looking at multiple versions of dctorch to determine which version is compatible with other requirements. This could take a while.
Downloading clean_fid-0.1.32-py3-none-any.whl (26 kB)
Downloading clean_fid-0.1.31-py3-none-any.whl (24 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io
u want to abort this run, press Ctrl + C.
Downloading clean_fid-0.1.30-py3-none-any.whl (24 kB)
Downloading clean_fid-0.1.29-py3-none-any.whl (24 kB)
Downloading clean_fid-0.1.28-py3-none-any.whl (23 kB)
Downloading clean_fid-0.1.26-py3-none-any.whl (23 kB)
Downloading clean_fid-0.1.25-py3-none-any.whl (23 kB)
Downloading clean_fid-0.1.24-py3-none-any.whl (23 kB)
Downloading clean_fid-0.1.23-py3-none-any.whl (23 kB)
Downloading clean_fid-0.1.22-py3-none-any.whl (23 kB)
Downloading clean_fid-0.1.21-py3-none-any.whl (23 kB)
Downloading clean_fid-0.1.19-py3-none-any.whl (23 kB)
Downloading clean_fid-0.1.18-py3-none-any.whl (23 kB)
Downloading clean_fid-0.1.17-py3-none-any.whl (23 kB)
Downloading clean_fid-0.1.16-py3-none-any.whl (22 kB)
Downloading clean_fid-0.1.15-py3-none-any.whl (22 kB)
Downloading clean_fid-0.1.14-py3-none-any.whl (22 kB)
Downloading clean_fid-0.1.13-py3-none-any.whl (19 kB)
Downloading clean_fid-0.1.12-py3-none-any.whl (19 kB)
Downloading clean_fid-0.1.11-py3-none-any.whl (19 kB)
Downloading clean_fid-0.1.10-py3-none-any.whl (16 kB)
Downloading clean_fid-0.1.9-py3-none-any.whl (15 kB)
Downloading clean_fid-0.1.8-py3-none-any.whl (16 kB)
Downloading clean_fid-0.1.6-py3-none-any.whl (15 kB)
Collecting accelerate (from k-diffusion->TTS)
Downloading accelerate-0.22.0-py3-none-any.whl (251 kB)
---------------------------------------- 251.2/251.2 kB 15.1 MB/s eta 0:00:00
Downloading accelerate-0.21.0-py3-none-any.whl (244 kB)
---------------------------------------- 244.2/244.2 kB 7.3 MB/s eta 0:00:00
Downloading accelerate-0.20.3-py3-none-any.whl (227 kB)
---------------------------------------- 227.6/227.6 kB 6.8 MB/s eta 0:00:00
Downloading accelerate-0.20.2-py3-none-any.whl (227 kB)
---------------------------------------- 227.5/227.5 kB 2.8 MB/s eta 0:00:00
Downloading accelerate-0.20.1-py3-none-any.whl (227 kB)
---------------------------------------- 227.5/227.5 kB 2.8 MB/s eta 0:00:00
Downloading accelerate-0.20.0-py3-none-any.whl (227 kB)
---------------------------------------- 227.4/227.4 kB 7.0 MB/s eta 0:00:00
Downloading accelerate-0.19.0-py3-none-any.whl (219 kB)
---------------------------------------- 219.1/219.1 kB 13.1 MB/s eta 0:00:00
Downloading accelerate-0.18.0-py3-none-any.whl (215 kB)
---------------------------------------- 215.3/215.3 kB 3.3 MB/s eta 0:00:00
Downloading accelerate-0.17.1-py3-none-any.whl (212 kB)
---------------------------------------- 212.8/212.8 kB 13.5 MB/s eta 0:00:00
Downloading accelerate-0.17.0-py3-none-any.whl (212 kB)
---------------------------------------- 212.8/212.8 kB 6.5 MB/s eta 0:00:00
Downloading accelerate-0.16.0-py3-none-any.whl (199 kB)
---------------------------------------- 199.7/199.7 kB 11.8 MB/s eta 0:00:00
Downloading accelerate-0.15.0-py3-none-any.whl (191 kB)
---------------------------------------- 191.5/191.5 kB 12.1 MB/s eta 0:00:00
Downloading accelerate-0.14.0-py3-none-any.whl (175 kB)
---------------------------------------- 176.0/176.0 kB 11.1 MB/s eta 0:00:00
Downloading accelerate-0.13.2-py3-none-any.whl (148 kB)
---------------------------------------- 148.8/148.8 kB 4.5 MB/s eta 0:00:00
Downloading accelerate-0.13.1-py3-none-any.whl (148 kB)
---------------------------------------- 148.8/148.8 kB 9.2 MB/s eta 0:00:00
Downloading accelerate-0.13.0-py3-none-any.whl (148 kB)
---------------------------------------- 148.8/148.8 kB 2.9 MB/s eta 0:00:00
Downloading accelerate-0.12.0-py3-none-any.whl (143 kB)
---------------------------------------- 144.0/144.0 kB 8.9 MB/s eta 0:00:00
Downloading accelerate-0.11.0-py3-none-any.whl (123 kB)
---------------------------------------- 123.1/123.1 kB 7.1 MB/s eta 0:00:00
Downloading accelerate-0.10.0-py3-none-any.whl (117 kB)
---------------------------------------- 117.1/117.1 kB ? eta 0:00:00
Downloading accelerate-0.9.0-py3-none-any.whl (106 kB)
---------------------------------------- 106.8/106.8 kB 3.1 MB/s eta 0:00:00
Downloading accelerate-0.8.0-py3-none-any.whl (114 kB)
---------------------------------------- 114.5/114.5 kB 6.5 MB/s eta 0:00:00
Downloading accelerate-0.7.1-py3-none-any.whl (79 kB)
---------------------------------------- 79.9/79.9 kB 2.2 MB/s eta 0:00:00
Downloading accelerate-0.7.0-py3-none-any.whl (79 kB)
---------------------------------------- 79.8/79.8 kB 4.3 MB/s eta 0:00:00
Downloading accelerate-0.6.2-py3-none-any.whl (65 kB)
---------------------------------------- 65.9/65.9 kB ? eta 0:00:00
Downloading accelerate-0.6.1-py3-none-any.whl (65 kB)
---------------------------------------- 65.9/65.9 kB 1.8 MB/s eta 0:00:00
Downloading accelerate-0.6.0-py3-none-any.whl (65 kB)
---------------------------------------- 65.8/65.8 kB 3.5 MB/s eta 0:00:00
Downloading accelerate-0.5.1-py3-none-any.whl (58 kB)
---------------------------------------- 58.0/58.0 kB 1.5 MB/s eta 0:00:00
Downloading accelerate-0.5.0-py3-none-any.whl (57 kB)
---------------------------------------- 58.0/58.0 kB 757.7 kB/s eta 0:00:00
Downloading accelerate-0.4.0-py3-none-any.whl (55 kB)
---------------------------------------- 55.3/55.3 kB 221.9 kB/s eta 0:00:00
Collecting soxr>=0.3.2 (from librosa==0.10.0.*->TTS)
Downloading soxr-0.3.5-cp38-cp38-win_amd64.whl (184 kB)
---------------------------------------- 184.4/184.4 kB 11.6 MB/s eta 0:00:00
Downloading soxr-0.3.4-cp38-cp38-win_amd64.whl (184 kB)
---------------------------------------- 184.8/184.8 kB 3.8 MB/s eta 0:00:00
Downloading soxr-0.3.3-cp38-cp38-win_amd64.whl (176 kB)
---------------------------------------- 176.7/176.7 kB 11.1 MB/s eta 0:00:00
Downloading soxr-0.3.2-cp38-cp38-win_amd64.whl (176 kB)
---------------------------------------- 176.7/176.7 kB ? eta 0:00:00
Collecting scikit-learn>=0.20.0 (from librosa==0.10.0.*->TTS)
Downloading scikit_learn-1.3.1-cp38-cp38-win_amd64.whl (9.3 MB)
---------------------------------------- 9.3/9.3 MB 3.5 MB/s eta 0:00:00
Collecting joblib>=0.14 (from librosa==0.10.0.*->TTS)
Downloading joblib-1.3.2-py3-none-any.whl (302 kB)
---------------------------------------- 302.2/302.2 kB 6.2 MB/s eta 0:00:00
Collecting scikit-learn>=0.20.0 (from librosa==0.10.0.*->TTS)
Downloading scikit_learn-1.3.0-cp38-cp38-win_amd64.whl (9.2 MB)
---------------------------------------- 9.2/9.2 MB 2.1 MB/s eta 0:00:00
Downloading scikit_learn-1.2.2-cp38-cp38-win_amd64.whl (8.3 MB)
---------------------------------------- 8.3/8.3 MB 2.6 MB/s eta 0:00:00
Downloading scikit_learn-1.2.1-cp38-cp38-win_amd64.whl (8.3 MB)
---------------------------------------- 8.3/8.3 MB 2.6 MB/s eta 0:00:00
Downloading scikit_learn-1.2.0-cp38-cp38-win_amd64.whl (8.2 MB)
---------------------------------------- 8.2/8.2 MB 4.1 MB/s eta 0:00:00
Downloading scikit_learn-1.1.3-cp38-cp38-win_amd64.whl (7.5 MB)
---------------------------------------- 7.5/7.5 MB 4.5 MB/s eta 0:00:00
Downloading scikit_learn-1.1.2-cp38-cp38-win_amd64.whl (7.3 MB)
---------------------------------------- 7.3/7.3 MB 3.9 MB/s eta 0:00:00
Downloading scikit_learn-1.1.1-cp38-cp38-win_amd64.whl (7.3 MB)
---------------------------------------- 7.3/7.3 MB 3.5 MB/s eta 0:00:00
Downloading scikit_learn-1.1.0-cp38-cp38-win_amd64.whl (7.3 MB)
---------------------------------------- 7.3/7.3 MB 3.3 MB/s eta 0:00:00
Using cached scikit_learn-1.0.2-cp38-cp38-win_amd64.whl (7.2 MB)
Downloading scikit_learn-1.0.1-cp38-cp38-win_amd64.whl (7.2 MB)
---------------------------------------- 7.2/7.2 MB 4.0 MB/s eta 0:00:00
Downloading scikit_learn-1.0-cp38-cp38-win_amd64.whl (7.2 MB)
---------------------------------------- 7.2/7.2 MB 4.2 MB/s eta 0:00:00
Downloading scikit_learn-0.24.2-cp38-cp38-win_amd64.whl (6.9 MB)
---------------------------------------- 6.9/6.9 MB 2.6 MB/s eta 0:00:00
Downloading scikit_learn-0.24.1-cp38-cp38-win_amd64.whl (6.9 MB)
---------------------------------------- 6.9/6.9 MB 4.2 MB/s eta 0:00:00
Downloading scikit_learn-0.24.0-cp38-cp38-win_amd64.whl (6.9 MB)
---------------------------------------- 6.9/6.9 MB 3.1 MB/s eta 0:00:00
Downloading scikit_learn-0.23.2-cp38-cp38-win_amd64.whl (6.8 MB)
---------------------------------------- 6.8/6.8 MB 2.7 MB/s eta 0:00:00
Downloading scikit_learn-0.23.1-cp38-cp38-win_amd64.whl (6.8 MB)
---------------------------------------- 6.8/6.8 MB 3.7 MB/s eta 0:00:00
Downloading scikit_learn-0.23.0-cp38-cp38-win_amd64.whl (6.8 MB)
---------------------------------------- 6.8/6.8 MB 4.3 MB/s eta 0:00:00
Downloading scikit_learn-0.22.2.post1-cp38-cp38-win_amd64.whl (6.6 MB)
---------------------------------------- 6.6/6.6 MB 4.0 MB/s eta 0:00:00
Downloading scikit_learn-0.22.2-cp38-cp38-win_amd64.whl (6.6 MB)
---------------------------------------- 6.6/6.6 MB 2.9 MB/s eta 0:00:00
Downloading scikit_learn-0.22.1-cp38-cp38-win_amd64.whl (6.4 MB)
---------------------------------------- 6.4/6.4 MB 3.0 MB/s eta 0:00:00
Downloading scikit_learn-0.22-cp38-cp38-win_amd64.whl (6.3 MB)
---------------------------------------- 6.3/6.3 MB 3.5 MB/s eta 0:00:00
Collecting pynndescent>=0.5 (from umap-learn==0.5.1->TTS)
Downloading pynndescent-0.5.9.tar.gz (1.1 MB)
---------------------------------------- 1.1/1.1 MB 3.4 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting contourpy>=1.0.1 (from matplotlib->TTS)
Downloading contourpy-1.1.1-cp38-cp38-win_amd64.whl (477 kB)
---------------------------------------- 477.9/477.9 kB 7.4 MB/s eta 0:00:00
Downloading contourpy-1.1.0-cp38-cp38-win_amd64.whl (470 kB)
---------------------------------------- 470.4/470.4 kB 7.3 MB/s eta 0:00:00
Using cached contourpy-1.0.7-cp38-cp38-win_amd64.whl (162 kB)
Downloading contourpy-1.0.6-cp38-cp38-win_amd64.whl (163 kB)
---------------------------------------- 163.5/163.5 kB 2.5 MB/s eta 0:00:00
Downloading contourpy-1.0.5-cp38-cp38-win_amd64.whl (164 kB)
---------------------------------------- 164.0/164.0 kB 5.0 MB/s eta 0:00:00
Downloading contourpy-1.0.4-cp38-cp38-win_amd64.whl (162 kB)
---------------------------------------- 162.5/162.5 kB 9.5 MB/s eta 0:00:00
Downloading contourpy-1.0.3-cp38-cp38-win_amd64.whl (159 kB)
---------------------------------------- 159.8/159.8 kB 9.3 MB/s eta 0:00:00
Downloading contourpy-1.0.2-cp38-cp38-win_amd64.whl (158 kB)
---------------------------------------- 158.1/158.1 kB 9.9 MB/s eta 0:00:00
Downloading contourpy-1.0.1-cp38-cp38-win_amd64.whl (158 kB)
---------------------------------------- 158.1/158.1 kB 9.2 MB/s eta 0:00:00
Collecting transformers (from TTS)
Downloading transformers-4.33.2-py3-none-any.whl (7.6 MB)
---------------------------------------- 7.6/7.6 MB 3.4 MB/s eta 0:00:00
Downloading transformers-4.33.1-py3-none-any.whl (7.6 MB)
---------------------------------------- 7.6/7.6 MB 3.5 MB/s eta 0:00:00
Downloading transformers-4.33.0-py3-none-any.whl (7.6 MB)
---------------------------------------- 7.6/7.6 MB 4.6 MB/s eta 0:00:00
Downloading transformers-4.32.1-py3-none-any.whl (7.5 MB)
---------------------------------------- 7.5/7.5 MB 2.6 MB/s eta 0:00:00
Downloading transformers-4.32.0-py3-none-any.whl (7.5 MB)
---------------------------------------- 7.5/7.5 MB 3.5 MB/s eta 0:00:00
Downloading transformers-4.31.0-py3-none-any.whl (7.4 MB)
---------------------------------------- 7.4/7.4 MB 3.6 MB/s eta 0:00:00
Downloading transformers-4.30.2-py3-none-any.whl (7.2 MB)
---------------------------------------- 7.2/7.2 MB 3.5 MB/s eta 0:00:00
Downloading transformers-4.30.1-py3-none-any.whl (7.2 MB)
---------------------------------------- 7.2/7.2 MB 4.3 MB/s eta 0:00:00
Downloading transformers-4.30.0-py3-none-any.whl (7.2 MB)
---------------------------------------- 7.2/7.2 MB 3.7 MB/s eta 0:00:00
Downloading transformers-4.29.2-py3-none-any.whl (7.1 MB)
---------------------------------------- 7.1/7.1 MB 4.5 MB/s eta 0:00:00
Downloading transformers-4.29.1-py3-none-any.whl (7.1 MB)
---------------------------------------- 7.1/7.1 MB 2.4 MB/s eta 0:00:00
Downloading transformers-4.29.0-py3-none-any.whl (7.1 MB)
---------------------------------------- 7.1/7.1 MB 4.0 MB/s eta 0:00:00
Downloading transformers-4.28.1-py3-none-any.whl (7.0 MB)
---------------------------------------- 7.0/7.0 MB 3.3 MB/s eta 0:00:00
Downloading transformers-4.28.0-py3-none-any.whl (7.0 MB)
---------------------------------------- 7.0/7.0 MB 4.1 MB/s eta 0:00:00
Downloading transformers-4.27.4-py3-none-any.whl (6.8 MB)
---------------------------------------- 6.8/6.8 MB 2.8 MB/s eta 0:00:00
Downloading transformers-4.27.3-py3-none-any.whl (6.8 MB)
---------------------------------------- 6.8/6.8 MB 3.8 MB/s eta 0:00:00
Downloading transformers-4.27.2-py3-none-any.whl (6.8 MB)
---------------------------------------- 6.8/6.8 MB 3.2 MB/s eta 0:00:00
Downloading transformers-4.27.1-py3-none-any.whl (6.7 MB)
---------------------------------------- 6.7/6.7 MB 3.7 MB/s eta 0:00:00
Downloading transformers-4.27.0-py3-none-any.whl (6.8 MB)
---------------------------------------- 6.8/6.8 MB 4.7 MB/s eta 0:00:00
Downloading transformers-4.26.1-py3-none-any.whl (6.3 MB)
---------------------------------------- 6.3/6.3 MB 3.6 MB/s eta 0:00:00
Downloading transformers-4.26.0-py3-none-any.whl (6.3 MB)
---------------------------------------- 6.3/6.3 MB 4.5 MB/s eta 0:00:00
Downloading transformers-4.25.1-py3-none-any.whl (5.8 MB)
---------------------------------------- 5.8/5.8 MB 4.6 MB/s eta 0:00:00
Downloading transformers-4.24.0-py3-none-any.whl (5.5 MB)
---------------------------------------- 5.5/5.5 MB 3.5 MB/s eta 0:00:00
Downloading transformers-4.23.1-py3-none-any.whl (5.3 MB)
---------------------------------------- 5.3/5.3 MB 2.5 MB/s eta 0:00:00
Downloading transformers-4.23.0-py3-none-any.whl (5.3 MB)
---------------------------------------- 5.3/5.3 MB 2.6 MB/s eta 0:00:00
Downloading transformers-4.22.2-py3-none-any.whl (4.9 MB)
---------------------------------------- 4.9/4.9 MB 4.3 MB/s eta 0:00:00
Collecting tokenizers!=0.11.3,<0.13,>=0.11.1 (from transformers->TTS)
Downloading tokenizers-0.12.1-cp38-cp38-win_amd64.whl (3.3 MB)
---------------------------------------- 3.3/3.3 MB 3.3 MB/s eta 0:00:00
Collecting transformers (from TTS)
Downloading transformers-4.22.1-py3-none-any.whl (4.9 MB)
---------------------------------------- 4.9/4.9 MB 3.5 MB/s eta 0:00:00
Downloading transformers-4.22.0-py3-none-any.whl (4.9 MB)
---------------------------------------- 4.9/4.9 MB 2.4 MB/s eta 0:00:00
Downloading transformers-4.21.3-py3-none-any.whl (4.7 MB)
---------------------------------------- 4.7/4.7 MB 3.9 MB/s eta 0:00:00
Downloading transformers-4.21.2-py3-none-any.whl (4.7 MB)
---------------------------------------- 4.7/4.7 MB 2.6 MB/s eta 0:00:00
Downloading transformers-4.21.1-py3-none-any.whl (4.7 MB)
---------------------------------------- 4.7/4.7 MB 4.0 MB/s eta 0:00:00
Downloading transformers-4.21.0-py3-none-any.whl (4.7 MB)
---------------------------------------- 4.7/4.7 MB 4.4 MB/s eta 0:00:00
Downloading transformers-4.20.1-py3-none-any.whl (4.4 MB)
---------------------------------------- 4.4/4.4 MB 2.9 MB/s eta 0:00:00
Downloading transformers-4.20.0-py3-none-any.whl (4.4 MB)
---------------------------------------- 4.4/4.4 MB 3.9 MB/s eta 0:00:00
Downloading transformers-4.19.4-py3-none-any.whl (4.2 MB)
---------------------------------------- 4.2/4.2 MB 3.1 MB/s eta 0:00:00
Downloading transformers-4.19.3-py3-none-any.whl (4.2 MB)
---------------------------------------- 4.2/4.2 MB 2.8 MB/s eta 0:00:00
Downloading transformers-4.19.2-py3-none-any.whl (4.2 MB)
---------------------------------------- 4.2/4.2 MB 3.6 MB/s eta 0:00:00
Downloading transformers-4.19.1-py3-none-any.whl (4.2 MB)
---------------------------------------- 4.2/4.2 MB 3.4 MB/s eta 0:00:00
Downloading transformers-4.19.0-py3-none-any.whl (4.2 MB)
---------------------------------------- 4.2/4.2 MB 3.9 MB/s eta 0:00:00
Downloading transformers-4.18.0-py3-none-any.whl (4.0 MB)
---------------------------------------- 4.0/4.0 MB 3.8 MB/s eta 0:00:00
Collecting sacremoses (from transformers->TTS)
Downloading sacremoses-0.0.53.tar.gz (880 kB)
---------------------------------------- 880.6/880.6 kB 5.1 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting transformers (from TTS)
Downloading transformers-4.17.0-py3-none-any.whl (3.8 MB)
---------------------------------------- 3.8/3.8 MB 4.4 MB/s eta 0:00:00
Collecting tokenizers!=0.11.3,>=0.11.1 (from transformers->TTS)
Downloading tokenizers-0.14.0-cp38-none-win_amd64.whl (2.2 MB)
---------------------------------------- 2.2/2.2 MB 3.6 MB/s eta 0:00:00
Collecting transformers (from TTS)
Downloading transformers-4.16.2-py3-none-any.whl (3.5 MB)
---------------------------------------- 3.5/3.5 MB 4.4 MB/s eta 0:00:00
Downloading transformers-4.16.1-py3-none-any.whl (3.5 MB)
---------------------------------------- 3.5/3.5 MB 2.5 MB/s eta 0:00:00
Downloading transformers-4.16.0-py3-none-any.whl (3.5 MB)
---------------------------------------- 3.5/3.5 MB 3.5 MB/s eta 0:00:00
Downloading transformers-4.15.0-py3-none-any.whl (3.4 MB)
---------------------------------------- 3.4/3.4 MB 2.8 MB/s eta 0:00:00
Collecting tokenizers<0.11,>=0.10.1 (from transformers->TTS)
Downloading tokenizers-0.10.3-cp38-cp38-win_amd64.whl (2.0 MB)
---------------------------------------- 2.0/2.0 MB 4.1 MB/s eta 0:00:00
Collecting transformers (from TTS)
Downloading transformers-4.14.1-py3-none-any.whl (3.4 MB)
---------------------------------------- 3.4/3.4 MB 2.8 MB/s eta 0:00:00
Downloading transformers-4.13.0-py3-none-any.whl (3.3 MB)
---------------------------------------- 3.3/3.3 MB 4.0 MB/s eta 0:00:00
Downloading transformers-4.12.5-py3-none-any.whl (3.1 MB)
---------------------------------------- 3.1/3.1 MB 3.3 MB/s eta 0:00:00
Downloading transformers-4.12.4-py3-none-any.whl (3.1 MB)
---------------------------------------- 3.1/3.1 MB 2.4 MB/s eta 0:00:00
Downloading transformers-4.12.3-py3-none-any.whl (3.1 MB)
---------------------------------------- 3.1/3.1 MB 3.7 MB/s eta 0:00:00
Downloading transformers-4.12.2-py3-none-any.whl (3.1 MB)
---------------------------------------- 3.1/3.1 MB 3.0 MB/s eta 0:00:00
Downloading transformers-4.12.1-py3-none-any.whl (3.1 MB)
---------------------------------------- 3.1/3.1 MB 3.0 MB/s eta 0:00:00
Downloading transformers-4.12.0-py3-none-any.whl (3.1 MB)
---------------------------------------- 3.1/3.1 MB 2.9 MB/s eta 0:00:00
Downloading transformers-4.11.3-py3-none-any.whl (2.9 MB)
---------------------------------------- 2.9/2.9 MB 2.9 MB/s eta 0:00:00
Downloading transformers-4.11.2-py3-none-any.whl (2.9 MB)
---------------------------------------- 2.9/2.9 MB 3.1 MB/s eta 0:00:00
Downloading transformers-4.11.1-py3-none-any.whl (2.9 MB)
---------------------------------------- 2.9/2.9 MB 4.3 MB/s eta 0:00:00
Downloading transformers-4.11.0-py3-none-any.whl (2.9 MB)
---------------------------------------- 2.9/2.9 MB 3.0 MB/s eta 0:00:00
Downloading transformers-4.10.3-py3-none-any.whl (2.8 MB)
---------------------------------------- 2.8/2.8 MB 2.2 MB/s eta 0:00:00
Downloading transformers-4.10.2-py3-none-any.whl (2.8 MB)
---------------------------------------- 2.8/2.8 MB 2.8 MB/s eta 0:00:00
Downloading transformers-4.10.1-py3-none-any.whl (2.8 MB)
---------------------------------------- 2.8/2.8 MB 3.4 MB/s eta 0:00:00
Downloading transformers-4.10.0-py3-none-any.whl (2.8 MB)
---------------------------------------- 2.8/2.8 MB 3.2 MB/s eta 0:00:00
Downloading transformers-4.9.2-py3-none-any.whl (2.6 MB)
---------------------------------------- 2.6/2.6 MB 4.0 MB/s eta 0:00:00
Collecting huggingface-hub==0.0.12 (from transformers->TTS)
Downloading huggingface_hub-0.0.12-py3-none-any.whl (37 kB)
Collecting transformers (from TTS)
Downloading transformers-4.9.1-py3-none-any.whl (2.6 MB)
---------------------------------------- 2.6/2.6 MB 4.1 MB/s eta 0:00:00
Downloading transformers-4.9.0-py3-none-any.whl (2.6 MB)
---------------------------------------- 2.6/2.6 MB 4.4 MB/s eta 0:00:00
Downloading transformers-4.8.2-py3-none-any.whl (2.5 MB)
---------------------------------------- 2.5/2.5 MB 3.2 MB/s eta 0:00:00
Downloading transformers-4.8.1-py3-none-any.whl (2.5 MB)
---------------------------------------- 2.5/2.5 MB 2.6 MB/s eta 0:00:00
Downloading transformers-4.8.0-py3-none-any.whl (2.5 MB)
---------------------------------------- 2.5/2.5 MB 2.9 MB/s eta 0:00:00
Downloading transformers-4.7.0-py3-none-any.whl (2.5 MB)
---------------------------------------- 2.5/2.5 MB 4.2 MB/s eta 0:00:00
Collecting huggingface-hub==0.0.8 (from transformers->TTS)
Downloading huggingface_hub-0.0.8-py3-none-any.whl (34 kB)
Collecting transformers (from TTS)
Downloading transformers-4.6.1-py3-none-any.whl (2.2 MB)
---------------------------------------- 2.2/2.2 MB 4.1 MB/s eta 0:00:00
Downloading transformers-4.6.0-py3-none-any.whl (2.3 MB)
---------------------------------------- 2.3/2.3 MB 4.5 MB/s eta 0:00:00
Downloading transformers-4.5.1-py3-none-any.whl (2.1 MB)
---------------------------------------- 2.1/2.1 MB 4.1 MB/s eta 0:00:00
Downloading transformers-4.5.0-py3-none-any.whl (2.1 MB)
---------------------------------------- 2.1/2.1 MB 2.9 MB/s eta 0:00:00
Downloading transformers-4.4.2-py3-none-any.whl (2.0 MB)
---------------------------------------- 2.0/2.0 MB 2.4 MB/s eta 0:00:00
Downloading transformers-4.4.1-py3-none-any.whl (2.1 MB)
---------------------------------------- 2.1/2.1 MB 3.3 MB/s eta 0:00:00
Downloading transformers-4.4.0-py3-none-any.whl (2.1 MB)
---------------------------------------- 2.1/2.1 MB 1.9 MB/s eta 0:00:00
Downloading transformers-4.3.3-py3-none-any.whl (1.9 MB)
---------------------------------------- 1.9/1.9 MB 3.3 MB/s eta 0:00:00
Downloading transformers-4.3.2-py3-none-any.whl (1.8 MB)
---------------------------------------- 1.8/1.8 MB 3.8 MB/s eta 0:00:00
Downloading transformers-4.3.1-py3-none-any.whl (1.8 MB)
---------------------------------------- 1.8/1.8 MB 4.1 MB/s eta 0:00:00
ERROR: Exception:
Traceback (most recent call last):
File "C:\Python38\lib\site-packages\pip\_internal\cli\base_command.py", line 169, in exc_logging_wrapper
status = run_func(*args)
File "C:\Python38\lib\site-packages\pip\_internal\cli\req_command.py", line 248, in wrapper
return func(self, options, args)
File "C:\Python38\lib\site-packages\pip\_internal\commands\install.py", line 377, in run
requirement_set = resolver.resolve(
File "C:\Python38\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 92, in resolve
result = self._result = resolver.resolve(
File "C:\Python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "C:\Python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 457, in resolve
raise ResolutionTooDeep(max_rounds)
pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 200000
```
**3. See error message:** `pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 200000`
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
nothing installed yet, just trying to do it on Windows 7, Python 3.8
```
### Additional context
_No response_ | 2hard
|
Title: BleakDotNetTaskError Could not get GATT characteristics AccessDenied
Body: * bleak version: 0.12.1
* Python version: 3.9
* Operating System: Win 10 [Version 10.0.19042.1083]
* BlueZ version (`bluetoothctl -v`) in case of Linux:
* Bluetooth Firmware Version: HCI 8.256 / LMP 8.256
### Description
Similar to issues #257 and #222, from what I understand.
I am trying to connect to a BLE Device using example code and get exceptions.
For reference I have previously interfaced with the device using closed source software without issues with the same hardware.
Noteworthy is that the device contains three characteristics with the same UUIDs related to the HID Service since the HID service seems to be the thing causing trouble.
### What I Did
Running the example code for Services I get the following output:
```
Traceback (most recent call last):
File "C:\Users\HP\PycharmProjects\Measure\venv\BLE-test3.py", line 32, in <module>
loop.run_until_complete(print_services(ADDRESS))
File "C:\Users\HP\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
return future.result()
File "C:\Users\HP\PycharmProjects\Measure\venv\BLE-test3.py", line 24, in print_services
async with BleakClient(mac_addr) as client:
File "C:\Users\HP\PycharmProjects\Measure\venv\lib\site-packages\bleak\backends\client.py", line 61, in __aenter__
await self.connect()
File "C:\Users\HP\PycharmProjects\Measure\venv\lib\site-packages\bleak\backends\winrt\client.py", line 227, in connect
await self.get_services(use_cached=use_cached)
File "C:\Users\HP\PycharmProjects\Measure\venv\lib\site-packages\bleak\backends\winrt\client.py", line 449, in get_services
raise BleakDotNetTaskError(
bleak.exc.BleakDotNetTaskError: Could not get GATT characteristics for <_winrt_Windows_Devices_Bluetooth_GenericAttributeProfile.GattDeviceService object at 0x000001A15CF7F290>: AccessDenied
```
By commenting out the ``` raise BleakDotNetTaskError( ``` in the file [winrt\client.py](https://github.com/hbldh/bleak/blob/7e0fdae6c0f6a78713e5984c2840666e0c38c3f3/bleak/backends/winrt/client.py#L449-L454) that the traceback is refering to, Bleak seems to work fairly normal, with the exception that the HID service has no characteristics.
| 1medium
|
Title: A Docker image for hanlp + jupyter
Body: **Describe the feature and the current behavior/state.**
Currently the official documentation does not offer a quicker way to get started with HanLP, so I built a HanLP + Jupyter Docker image that can help interested people get hands-on and try it out faster.
walterinsh/hanlp:2.0.0a41-jupyter
[https://github.com/WalterInSH/hanlp-jupyter-docker](https://github.com/WalterInSH/hanlp-jupyter-docker)
If it meets your expectations, it could be added to the documentation.
**Will this change the current api? How?**
No
**Who will benefit with this feature?**
People who use Docker and want to try HanLP quickly.
**Are you willing to contribute it (Yes/No):**
yes
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Debian
- Python version: 3.6
- HanLP version: 2.0.0a41
**Any other info**
* [x] I've carefully completed this form.
| 1medium
|
Title: Problems when adding the Swift language
Body: In `languages` I added:
_swift_lang_config = {
    "run": {
        "exe_name": "t.swift",
        "command": "/usr/bin/swift {exe_path}",
        "seccomp_rule": None,
    }
}
I also installed the Swift environment inside the JudgeServer container, and Swift code can be run inside the container (Swift can also run without compiling).
However, when it runs in the project, a runtime error occurs.
Finally, in the files of this run (in the mounted judege_server/run/run target directory), I found an error. The 1.out file shows: <unknown>:0: error: unable to open output file '/home/code/.cache/clang/ModuleCache/VXKMIN1Y83K6/SwiftShims-1KFO504FT44T.pcm': 'No such file or directory'
<unknown>:0: error: could not build C module 'SwiftShims'.
How should this problem be solved?
| 1medium
|
Title: installer reports errors when packaging
Body: Version 1.2.62 can be packaged successfully, but a 403 error is raised when requesting data.
Version 1.2.84 raises an error while packaging:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd7 in position 2556: invalid continuation byte | 1medium
|
Title: Docs are not clear about installation requirements
Body: The docs say:
> Eve is powered by Flask, Redis, Cerberus, Events
but it does not indicate if all of those are required.
Specifically, I have failed to find anywhere in the docs if Redis is an optional dependency.
Looking into the requirements.txt and reading usage samples of Redis, `app = Eve()` vs `app = Eve(redis=r)` also suggests Redis is optional.
However, that is too many hoops for those new to Eve to conclude about Redis requirement - many may give up having no option to host Redis. For example, PythonAnywhere users: [I couldn't tell from their docs whether redis is a hard requirement or something that you can use for some features](https://www.pythonanywhere.com/forums/topic/3730/#id_post_18968)).
| 1medium
|
Title: Preventing useless queries when listing entities
Body: Hello,
On an application with about 100k entries, listing them takes minutes. This surprised me because listing entries should be fairly quick, even if there are many of them. It appears that **for each entry** it produces a query to **each** relationship. This makes a huge number of queries. To understand if I did something wrong, I started from your own example in the documentation. I created 100 computers and 100 persons, related to a computer.
Then I listed all the computers (with `/computers?page[size]=0`) and I asked SQLAlchemy to log every query. This confirmed that I had one `SELECT` on the `computer` table and as many `SELECT` on the `person` table as there are owner of a computer. For instance, one of them:
```
INFO:sqlalchemy.engine.base.Engine:SELECT person.id AS person_id, person.name AS person_name, person.email AS person_email, person.birth_date AS person_birth_date, person.password AS person_password
FROM person
WHERE person.id = ?
INFO:sqlalchemy.engine.base.Engine:(19,)
```
First: why is this query necessary? I mean the listing doesn't provide the detail of the person, so why retrieving this data? How could we prevent Flask-REST-JSONAPI from retrieving it?
Second: if this query is necessary, why don't you have a join?
Third: can I prevent this from happening, to avoid huge efficiency losses?
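For reference, here is a minimal, self-contained sketch (model names follow the person/computer example from the documentation and are assumptions on my part) of how plain SQLAlchemy avoids the per-row SELECTs with an eager-loaded JOIN:
```python
# Minimal sketch: eager loading in plain SQLAlchemy so that listing computers
# issues a single JOINed query instead of one SELECT per owner (the N+1 pattern).
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, joinedload, relationship

Base = declarative_base()

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Computer(Base):
    __tablename__ = "computer"
    id = Column(Integer, primary_key=True)
    serial = Column(String)
    person_id = Column(Integer, ForeignKey("person.id"))
    owner = relationship(Person)

engine = create_engine("sqlite://", echo=True)
Base.metadata.create_all(engine)

with Session(engine) as session:
    # One query with a LEFT OUTER JOIN, instead of 1 + N queries.
    computers = session.query(Computer).options(joinedload(Computer.owner)).all()
```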
Thanks a lot! | 2hard
|
Title: When comparing text similarity, swapping the positions of the two strings gives different similarity scores
Body: <!--
Thank you for finding a bug! Please fill in the form below carefully.
-->
**Describe the bug**
When comparing text similarity, swapping the positions of the two strings gives different similarity scores.
**Code to reproduce the issue**

```python
```
**Describe the current behavior**
After clicking run, the text similarity values turn out to be different.
**Expected behavior**
Swapping the positions of the strings should produce the same text similarity.
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Python version:
- HanLP version:
**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
* [x] I've completed this form and searched the web for solutions.
<!-- โฌ๏ธ This box must be checked, otherwise your issue will be deleted automatically by a bot! --> | 1medium
|
Title: [PR] Fix an issue with mistakenly added Bearer auth in addition to Basic auth
Body: > <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2019-11-19 22:31:04+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/243
> Merged by [nolar](https://github.com/nolar) at _2019-11-19 23:55:13+00:00_
> Issue : #242
## Description
`Authorization: Bearer` header was sent (without a token!) because there was a schema defined by default (`"Bearer"`), which should not be there (`None`).
This caused problems when Basic auth (username+password) was used โ it could not co-exist with `Authorization: Bearer` header.
## Types of Changes
- Bug fix (non-breaking change which fixes an issue)
---
> <a href="https://github.com/dneuhaeuser-zalando"><img align="left" height="30" src="https://avatars2.githubusercontent.com/u/37899626?v=4"></a> Commented by [dneuhaeuser-zalando](https://github.com/dneuhaeuser-zalando) at _2019-11-19 23:12:50+00:00_
>
Approved because I understand it's somewhat urgent but ideally this should have a test, to ensure this fixes the problem and to prevent a regression.
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2019-11-20 01:01:26+00:00_
>
The tests for this case are added post-factum in #244 (with other auth/ssl tests). | 1medium
|
Title: [core] Guard ray C++ code quality via unit test
Body: ### Description
Ray core C++ components are not properly unit tested:
- As people have left, there is less confidence in guarding against improper code changes due to missing context;
- Sanitizer on CI is only triggered on unit test;
- Unit test coverage is a good indicator of code quality (i.e. 85% branch coverage).
### Use case
_No response_ | 1medium
|
Title: Namespace expect without model
Body: @ns.expect(someModel)
def get(self):
    pass
Instead of having a model passed to expect decorator, can I have a custom JSON? No model required in the application. | 1medium
|
Title: `Card` component's `ma-{margin}` class takes precedence over `classes`
Body: The view below does not seem to respect my CSS class, at least not my `margin-bottom: ... !important` property.
I see that `Card` prepends `ma-{margin}` to the class order. On inspection, I see that `ma-0` is applied as `.v-application .ma-0`, which applies a `margin: # !important` property.
Two things:
1. Does `v-application` overrides precedence somehow? There are several `v-application` nested classes throughout. I take it this is `vuetify`? Is their higher precedence by design?
2. The issue really is the `!important` flag on the `ma` class. It effectively blocks any user styles. Can this be modified?
```python
with Card(
margin=0,
classes=["container node-card"],
):
...
``` | 1medium
|
Title: Suggested improvements for pre.py
Body: **Summary [one-sentence description of the problem]**
How to stop `pre.py` once it has started.
**Env & To Reproduce [environment and reproduction]**
All dependencies and the environment are fine.
After starting training with `pre.py`, it cannot be stopped.
Pressing `Ctrl + C` raises an error, but the computation does not stop.
Looking at the `pre.py` source code, this appears to be a `multiprocessing` issue: its process instances need to be stopped manually.
Consider using:
`p.terminate()`
`p.join()`
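A minimal sketch of the suggestion (the worker function and process count are hypothetical, not taken from `pre.py`):
```python
# Minimal sketch: keep references to the worker processes so Ctrl+C can
# terminate and join them instead of leaving the computation running.
import multiprocessing as mp
import time

def worker():
    while True:
        time.sleep(1)

if __name__ == "__main__":
    procs = [mp.Process(target=worker) for _ in range(4)]
    for p in procs:
        p.start()
    try:
        for p in procs:
            p.join()
    except KeyboardInterrupt:
        for p in procs:
            p.terminate()
        for p in procs:
            p.join()
```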
| 1medium
|
Title: [CG, Core] Add Ascend NPU Support for RCCL and CG
Body: ### Description
This RFC proposes to provide initial support for RCCL and CG on Ascend NPU.
Original work by [@Bye-legumes](https://github.com/ray-project/ray/pull/47658) and [@hipudding](https://github.com/ray-project/ray/pull/51032).
However, we need to decouple them into several PRs with minor modifications and set an example for further hardware support.
## Notes:
- I previously submitted a PR in September 2024 to support HCCL and refactor NCCL into a communicator, but the feedback was that it was too large and complicated and we should decouple into some PR with minor modification.
- We should avoid adding additional C code into Ray, as that would influence the build stage.
## Plan for Decoupling into Several Stages:
### **PR1: Support RCCL on NPU**
Ray Core supports scheduling on Ascend NPU devices, but the Ray Collective API does not yet support communication between NPUs using HCCL.
๐ [PR #50790](https://github.com/ray-project/ray/pull/50790)
๐ค @liuxsh9
### **PR2: Refactor CG to Support Multiple Devices**
We can refer to [this PR](https://github.com/ray-project/ray/pull/44086) to decouple device-related modules.
Move cupy dependency, support rank mapping or different progress group.
๐ค @hipudding
### **PR3: CG Support for NPU**
CG support will be added after RCCL is merged, utilizing the RCCL API from [PR #47658](https://github.com/ray-project/ray/pull/47658).
๐ค @Bye-legumes
### **Merge Strategy**
- PR2 and PR3 can be merged independently.
- PR3 will adjust accordingly based on PR2.
### CANN+torch Version
Based on vLLM or latest?
### Use case
Support vllm-ascend https://github.com/vllm-project/vllm-ascend | 2hard
|
Title: Field names which begins and ends with underscores being prefixed with `field`
Body: **Describe the bug**
There was a PR some time ago: https://github.com/koxudaxi/datamodel-code-generator/pull/962
It restricts usage of protected and private variables, but it doesn't consider variables with double-underscores on both sides, e.g. `__version__`.
Such variables are supported by pydantic and you can access them without any problems.
**To Reproduce**
Example schema:
```json
{
"title": "Event",
"properties": {
"__version__": {
"type": "string",
"title": "Event version"
}
}
}
```
Used commandline:
```
$ datamodel-codegen --input event.json --output event.py
```
**Actual behavior**
```python
class Event(BaseModel):
field__version__: Optional[str] = Field(
None, alias='__version__', title='Event version'
)
```
**Expected behavior**
```python
class Event(BaseModel):
__version__: Optional[str] = Field(
None, alias='__version__', title='Event version'
)
```
**Version:**
- OS: MacOS
- Python version: 3.11
- datamodel-code-generator version: 0.21.1
| 1medium
|
Title: Release v0.12.0
Body: Release tracker issue for v0.12.0.
Mostly opening so that it gets issue #3000, which is satisfying. | 3misc
|
Title: Use httpx.Client Directly
Body: As of version 0.6.1, the generated `Client` is somewhat configurable - headers, cookies, and timeout. However, these are all abstractions which have to then be handled explicitly within each generated API method.
Would it be simpler to just make calls using an `httpx.Client` or `httpx.AsyncClient` instance, and allow consumers to configure that directly? Advantages:
- Multiple versions of `httpx` can be supported, and there's less likelihood that you'll have to change your package due to changes or new features in `httpx`.
- It's more efficient than direct calls to `httpx.get` etc, and explicitly what `httpx` recommends in [its documentation](https://www.python-httpx.org/advanced/):
> If you do anything more than experimentation, one-off scripts, or prototypes, then you should use a Client instance.
of course, this package _does_ use the context manager within API operations, but that doesn't allow _multiple calls_ to share the same client and thus connection.
- Everything else good in that documentation, like the ability to use the generated client package as a WSGI test client
- [Event hooks](https://www.python-httpx.org/advanced/#event-hooks) will allow consumers to implement our own global retry logic (like refreshing authentication tokens) prior to official retry support from `httpx` itself.
- `AuthenticatedClient` and `Client` can just each just become an `httpx.Client` configured with different headers.
**tl;dr**: it decreases coupling between the two packages and lets you worry less about the client configuration and how to abstract it. More `httpx` functionality will be directly available to consumers, so you'll get fewer (actionable) feature requests. Future breaking changes here will be less likely. Seems like this alone would allow closing a couple currently pending issues (retries, different auth methods, response mimetypes), by putting them entirely in the hands of the consumer.
**Describe the solution you'd like**
There are a few options.
1. The `httpx.Client` could be used directly (i.e. replace `client.py` entirely). API methods would just accept the client and use it directly, and it would be up to the caller to configure and manage it. This is the simplest for sure, and meets the current use case. This is what I'd recommend.
```python
def sync_detailed(
*,
client: httpx.Client,
json_body: CreateUserRequest,
) -> Response[Union[User, Error]]:
kwargs = _get_kwargs(
client=client,
json_body=json_body,
)
response = client.post(
**kwargs,
)
return _build_response(response=response)
```
2. The `Client` could wrap an `httpx.Client` which allows you to add convenience methods as needed, and stay in control of the `Client` object itself. This abstraction layer offers protected variation, but wouldn't be used for anything right now - headers, timeouts, and cookies can all be configured directly on an `httpx.Client`. _However_ this need could also be met with configuration values passed directly to each API operation.
```python
def sync_detailed(
*,
client: Client,
json_body: CreateUserRequest,
) -> Response[Union[User, Error]]:
kwargs = _get_kwargs(
client=client.httpx_client,
json_body=json_body,
)
response = client.httpx_client.post(
**kwargs,
)
return _build_response(response=response)
```
3. Keep the `Client` and proxy calls (with `__getattr__`) to an inner client, _or_ typecheck `client` on each API operation to see if you've got a `Client` or `httpx.Client`. This allows them to be used interchangeably in API operations. This one's the most fragile and doesn't offer any advantages at the moment.
Of course, this would all apply to `AsyncClient` for the `asyncio` calls.
**Additional context**
Happy to send a PR, can do it pretty quickly. Am looking to use this in production, and would love to stay on (and contribute to) mainline rather than a fork!
| 1medium
|
Title: RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
Body: 
This appears as soon as training starts; can anyone help solve it? | 2hard
|
Title: Move Model documents to different files (MongoEngine) Example
Body: I am trying to figure out how I can create the `MongoEngine.Document` classes in separate files and still use the instance variable here:
https://github.com/biosustain/potion/blob/dc71f4954422f6edfde5bfa86f65dd622a35fdea/examples/mongoengine_simple.py#L12
Is there a good way of doing this so I can create a connection to the database and pass that mongo engine object around when i define my class modules in separate files?
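For context, a minimal sketch (file names and fields are hypothetical) of the registry-based pattern where Document subclasses live in separate modules and only the app module calls connect(); mongoengine keeps the connection in a global registry, so the Document classes never need the connection object passed to them:
```python
# models/user.py -- hypothetical module holding one Document class.
from mongoengine import Document, StringField

class User(Document):
    email = StringField(required=True)

# app.py -- hypothetical entry point; connect() registers a global default
# connection that the Document classes defined elsewhere will use.
from mongoengine import connect
from models.user import User

connect(db="potion_example", host="mongodb://localhost:27017")
User(email="a@example.com").save()
```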
| 1medium
|
Title: Refactor tests
Body: Test files starts to be too dense.
Refactor to split into more files.
| 1medium
|
Title: Apparent leaks between tests with (customised) qapp
Body: The [pytest-qt documentation](https://pytest-qt.readthedocs.io/en/latest/qapplication.html#testing-custom-qapplications) explains how to create a `QApplication` subclass from your own project which will then take the place of the default fixture `qapp` used to make a default `QApplication`. It tells you to put that in the conftest.py file in the relevant testing directory, and to give it "session" scope. From my experience any other scope causes horrendous crashes.
But what this means is that this fixture is only run once in your whole test session. `qapp` appears to be a strange beast, because you can add attributes to it, get your application code to change these attributes, etc. So... it's kind of half an object and half a function (which is only called once).
Dealing with the above wouldn't be that hard: you can prefer methods to attributes (e.g. `MyApp.set_version(...)` rather than `MyApp.version = ...`).
But there's a bigger problem I've just experienced: apparent leaking of patches between tests. This test, which checks that `setWindowTitle` is set on `app.main_window`, passes OK when run on its own:
```
def test_window_title_updated_on_new_doc(request, qapp):
t_logger.info(f'\n>>>>>> test name: {request.node.originalname}')
qapp.main_window = main_window.AutoTransMainWindow()
with unittest.mock.patch.object(qapp.main_window, 'setWindowTitle') as mock_set_wt:
qapp.try_to_create_new_doc()
mock_set_wt.assert_called_once()
```
... but there is another method before this:
```
@pytest.mark.parametrize('close_result', [True, False])
def test_try_to_create_new_doc_returns_expected_result(request, close_result, qapp):
t_logger.info(f'\n>>>>>> test name: {request.node.originalname}, close_result {close_result}')
with unittest.mock.patch.object(qapp, 'main_window'):
qapp.open_document = project.Project()
with unittest.mock.patch.object(qapp, 'try_to_close_curr_doc') as mock_try:
mock_try.return_value = close_result
create_result = qapp.try_to_create_new_doc()
assert close_result == create_result
```
... this tests that `app.try_to_create_new_doc` returns the same boolean value as `try_to_close_curr_doc`. This method passes with `close_result` as both `True` and `False`.
When both tests are run in the same `pytest` command, however, I get the following error on the *second* test (i.e. `test_window_updated_on_new_doc`):
```
E AssertionError: Expected 'setWindowTitle' to have been called once. Called 2 times.
E Calls: [call('Auto_trans 0.0.1 - No projects open'),
E call('Auto_trans 0.0.1 - Project: Not yet saved')].
```
These calls happened during the *first* test, i.e. `test_try_to_create_new_doc_returns_expected_result`, something which I've been able to verify, but they get reported as fails during the *second* test!
Does anyone know what to do about this?
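A possible mitigation to experiment with (a sketch only, assuming the cross-test state is the attributes added to the session-scoped `qapp`, such as `main_window` and `open_document`):
```python
# Sketch: clear attributes added to the session-scoped qapp after each test,
# so state created in one test cannot influence the next.
import pytest

@pytest.fixture(autouse=True)
def _reset_qapp_state(qapp):
    yield
    for attr in ("main_window", "open_document"):
        if hasattr(qapp, attr):
            delattr(qapp, attr)
```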
| 1medium
|
Title: No module named 'textblob'
Body: Hi there,
I am a starter of Python and I would like to use 'textblob'. I am a MacOS High Sierra user.
What I tried is to install textblob on a new anaconda environment by `conda install -c conda-forge textblob` and `conda install -c conda-forge/label/gcc7 textblob`. It gets installed and then I check on the conda list and textblob is there. However, when I am running `from textblob import TextBlob` on Python I get an error: **No module named 'textblob'**
How can I resolve this? Thank you in advance | 0easy
|
Title: Wan2.1 result is black, when using --use-sage-attention and setting weight_dtype to fp8_e4m3fn.
Body: When using --use-sage-attention and setting weight_dtype to fp8_e4m3fn, the result is black,
Using --use-sage-attention, --force-upcast-attention and setting weight_dtype to fp8_e4m3fn, the result is still black. | 2hard
|
Title: [Bug] xtts OrderedVocab problem
Body: ### Describe the bug
> TRAINING (2023-10-28 18:37:37)
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
### To Reproduce
training xtts with standard recipe
### Expected behavior
_No response_
### Logs
```shell
python finetunextts.py
>> DVAE weights restored from: C:\Users\someone\Desktop\xtts/run\training\XTTS_v1.1_original_model_files/dvae.pth
| > Found 489 files in C:\Users\someone\Desktop\xtts
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
> Training Environment:
| > Current device: 0
| > Num. of GPUs: 1
| > Num. of CPUs: 12
| > Num. of Torch Threads: 1
| > Torch seed: 1
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
> Start Tensorboard: tensorboard --logdir=C:\Users\someone\Desktop\xtts/run\training\GPT_XTTS_LJSpeech_FT-October-27-2023_10+56PM-0000000
> Model has 543985103 parameters
> EPOCH: 0/1000
--> C:\Users\someone\Desktop\xtts/run\training\GPT_XTTS_LJSpeech_FT-October-27-2023_10+56PM-0000000
> Filtering invalid eval samples!!
> Total eval samples after filtering: 4
> EVALUATION
| > Synthesizing test sentences.
--> EVAL PERFORMANCE
| > avg_loader_time: 0.01900 (+0.00000)
| > avg_loss_text_ce: 0.04067 (+0.00000)
| > avg_loss_mel_ce: 4.33739 (+0.00000)
| > avg_loss: 4.37806 (+0.00000)
> BEST MODEL : C:\Users\someone\Desktop\xtts/run\training\GPT_XTTS_LJSpeech_FT-October-27-2023_10+56PM-0000000\best_model_0.pth
> EPOCH: 1/1000
--> C:\Users\someone\Desktop\xtts/run\training\GPT_XTTS_LJSpeech_FT-October-27-2023_10+56PM-0000000
> Sampling by language: dict_keys(['en'])
> TRAINING (2023-10-27 22:57:08)
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5024, your vocabulary could be corrupted !
The OrderedVocab you are attempting to save contains a hole for index 5025, your vocabulary could be corrupted !
--> STEP: 0/243 -- GLOBAL_STEP: 0
| > loss_text_ce: 0.04536 (0.04536)
| > loss_mel_ce: 4.79820 (4.79820)
| > loss: 4.84356 (4.84356)
| > current_lr: 0.00001
| > step_time: 0.88430 (0.88431)
| > loader_time: 70.14840 (70.14841)
--> STEP: 50/243 -- GLOBAL_STEP: 50
| > loss_text_ce: 0.04994 (0.04525)
| > loss_mel_ce: 5.39171 (4.74854)
| > loss: 5.44165 (4.79379)
| > current_lr: 0.00001
| > step_time: 0.66870 (1.54556)
| > loader_time: 0.01600 (0.01624)
--> STEP: 100/243 -- GLOBAL_STEP: 100
| > loss_text_ce: 0.04045 (0.04345)
| > loss_mel_ce: 3.84910 (4.67366)
| > loss: 3.88955 (4.71711)
| > current_lr: 0.00001
| > step_time: 1.74700 (1.66512)
| > loader_time: 0.01520 (0.01434)
--> STEP: 150/243 -- GLOBAL_STEP: 150
| > loss_text_ce: 0.05477 (0.04379)
| > loss_mel_ce: 5.39814 (4.72587)
| > loss: 5.45292 (4.76966)
| > current_lr: 0.00001
| > step_time: 2.80970 (1.85835)
| > loader_time: 0.01400 (0.01352)
--> STEP: 200/243 -- GLOBAL_STEP: 200
| > loss_text_ce: 0.03867 (0.04367)
| > loss_mel_ce: 4.21473 (4.71702)
| > loss: 4.25340 (4.76068)
| > current_lr: 0.00001
| > step_time: 3.30200 (2.20536)
| > loader_time: 0.00500 (0.01207)
> Filtering invalid eval samples!!
> Total eval samples after filtering: 4
> EVALUATION
| > Synthesizing test sentences.
--> EVAL PERFORMANCE
| > avg_loader_time: 0.01202 (-0.00698)
| > avg_loss_text_ce: 0.03961 (-0.00106)
| > avg_loss_mel_ce: 4.15599 (-0.18140)
| > avg_loss: 4.19560 (-0.18246)
> BEST MODEL : C:\Users\someone\Desktop\xtts/run\training\GPT_XTTS_LJSpeech_FT-October-27-2023_10+56PM-0000000\best_model_243.pth
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA GeForce RTX 3060"
],
"available": true,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.1",
"TTS": "0.19.0",
"numpy": "1.22.0"
},
"System": {
"OS": "Windows",
"architecture": [
"64bit",
"WindowsPE"
],
"processor": "AMD64 Family 25 Model 80 Stepping 0, AuthenticAMD",
"python": "3.9.13",
"version": "10.0.22621"
}
}
```
### Additional context
_No response_ | 1medium
|
Title: Error in exe file made by PyInstaller
Body: ### System Info
python = 3.11.7
pandasai = 1.15.8
openai = 1.10.0
I made an executable file with PyInstaller using the following code.
=============================================================================
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import OpenAI
llm = OpenAI(api_token="", model = 'gpt-4')
df = pd.DataFrame({
"country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China"],
"gdp": [19294482071552, 2891615567872, 2411255037952, 3435817336832, 1745433788416, 1181205135360, 1607402389504, 1490967855104, 4380756541440, 14631844184064],
"happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12]
})
df = SmartDataframe(df, config={"llm": llm})
print(df.chat('Which are the 5 happiest countries?'))
============================================================================
### ๐ Describe the bug
If I run this exe file, I got following error
===========================================================================
Unfortunately, I was not able to answer your question, because of the following error:
'help'
=========================================================================== | 1medium
|
Title: [BUG] AttributeError: 'method' object has no attribute '_ninja_operation'
Body: **Describe the bug**
When I'm trying to create a class-based router using the ApiRouter class as a base class, I receive this error at the time of self.add_api_operations:
```
view_func._ninja_operation = operation # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'method' object has no attribute '_ninja_operation'
```
When I comment out this line in the source code for django-ninja my project works absolutely correctly.
Code snippet:
```
class TestRouter(Router):
def __init__(self: Self) -> None:
super().__init__()
self.tags = ["Router"]
self.add_api_operation(
methods=["POST"],
path="/asd",
view_func=self.hello,
)
def hello(self: Self, request: WSGIRequest) -> str:
return "ok"
```
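A workaround sketch (my own assumption, not an official API): since bound methods cannot accept new attributes, wrapping the method in a plain function before passing it to `add_api_operation` avoids the failing attribute assignment:
```python
# Sketch: wrap the bound method in a plain function object that can accept the
# _ninja_operation attribute set by django-ninja.
import functools

def as_view(bound_method):
    @functools.wraps(bound_method)
    def view(request, *args, **kwargs):
        return bound_method(request, *args, **kwargs)
    return view

# inside __init__:
#   self.add_api_operation(methods=["POST"], path="/asd", view_func=as_view(self.hello))
```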
**Versions (please complete the following information):**
- Python version: 3.12.3
Note you can quickly get this by runninng in `./manage.py shell` this line:
```
>>> import django; import pydantic; import ninja; django.__version__; ninja.__version__; pydantic.__version__
'5.1.3'
'1.3.0'
'2.10.4'
```
| 1medium
|
Title: Make media codecs optional
Body: For some use-cases, I think that media codecs are not required. For example, I am just interested in data channels.
Would you accept a PR that moves `av` to [extra_require](https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies) and make the `mediastreams` module optional? | 1medium
|
Title: BUG: GUI silently crashes if `classes.txt` is not found
Body: When a folder is opened in `labelImg` GUI that doesn't have `classes.txt`, the GUI silently crashes without showing any error popup.
### Steps to Reproduce
- Put some images and corresponding annotation text files in a test folder.
- DON'T create `classes.txt`.
- Start `labelImg` GUI, and open the test folder using *Open Directory*.
- `labelImg` tries to read `classes.txt` in the test folder, and prints `FileNotFound` error to the console.
- **No error popup is shown in the GUI** and the program crashes after a few moments.
### Environment
- **OS:** Windows 11
- **PyQt version:** 5.5.19
- **Python version:** 3.11
| 1medium
|
Title: np.fromfile not supported
Body: How can I make np.fromfile work the way np.load does?
```python
def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):
import numpy as np
if hasattr(filepath_or_buffer, "read"):
return np.fromfile(filepath_or_buffer, *args, **kwargs)
else:
filepath_or_buffer = str(filepath_or_buffer)
return np.fromfile(xopen(filepath_or_buffer, "rb", download_config=download_config).read(), *args, **kwargs)
```
This does not work.
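For what it's worth, a sketch of a direction that might work (an assumption on my part): `np.fromfile` wants a path or a real file object, not raw bytes, so the bytes read from the opened stream could be decoded with `np.frombuffer` instead:
```python
# Sketch: decode raw bytes with np.frombuffer, since np.fromfile does not
# accept a bytes object returned by fileobj.read().
import io
import numpy as np

def read_array_from_fileobj(fileobj, dtype=np.float32):
    data = fileobj.read()
    return np.frombuffer(data, dtype=dtype)

# tiny self-check with an in-memory "file"
buf = io.BytesIO(np.arange(4, dtype=np.float32).tobytes())
print(read_array_from_fileobj(buf))  # [0. 1. 2. 3.]
```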
| 1medium
|
Title: Search filter for Djoser auth/users view ?
Body: Hi,
Is there a way to add a search filter (https://www.django-rest-framework.org/api-guide/filtering/#searchfilter) to the `auth/users/` GET endpoint of Djoser ?
I would like to add a username filter without having to use an extra endpoint.
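For illustration, the kind of filtering I mean (a sketch; the subclass and its routing are assumptions, not an existing Djoser setting):
```python
# Sketch: a UserViewSet subclass with DRF's SearchFilter, routed in place of
# the default Djoser viewset for auth/users/.
from djoser.views import UserViewSet
from rest_framework.filters import SearchFilter

class SearchableUserViewSet(UserViewSet):
    filter_backends = [SearchFilter]
    search_fields = ["username"]
```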
Would it make sense to create a pull request to add a setting to specify some custom filters on the views ? | 1medium
|
Title: How to unit test the application without a create_app function due to known bug with socketio
Body: Hello, I'm struggling to unit test my application because I don't have a create_app() function which I think I need for the unit tests. I heard it was a known bug with socketio that you can't use a create_app() function and then use flask run. How do you unit test an application otherwise? Or is the bug fixed perchance?
My app code is as follows:
```
#!/usr/bin/python3
# maybe delete above line
# app.py
from flask import Flask, session
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager
import configparser
from flask_socketio import SocketIO, emit, send, join_room, leave_room
config = configparser.ConfigParser()
config.read("../settings.conf")
app = Flask(__name__)
# Ignores slashes on the end of URLs.
app.url_map.strict_slashes = False
app.config['SECRET_KEY'] = config.get('SQLALCHEMY','secret_key')
app.config['SQLALCHEMY_DATABASE_URI'] = config.get('SQLALCHEMY','sqlalchemy_database_uri')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
# init SQLAlchemy so we can use it later
db = SQLAlchemy(app)
socketio = SocketIO()
socketio.init_app(app)
login_manager = LoginManager()
login_manager.login_view = 'auth.login'
login_manager.init_app(app)
import models
@login_manager.user_loader
def load_user(user_id):
return models.User.query.get(int(user_id))
# blueprint for auth routes in our app
from controllers.auth import auth as auth_blueprint
app.register_blueprint(auth_blueprint)
# blueprint for non-auth parts of app
from controllers.main import main as main_blueprint
app.register_blueprint(main_blueprint)
# blueprint for chase_the_ace parts of app
from controllers.games.chase_the_ace import chase_the_ace as chase_the_ace_blueprint
app.register_blueprint(chase_the_ace_blueprint)
# blueprint for shed parts of app
from controllers.games.shed import shed as shed_blueprint
app.register_blueprint(shed_blueprint)
# Game sockets import game mechanics and socketio listeners.
from controllers.games import chase_the_ace_gameplay
@socketio.on('connect')
def handle_my_connect_event():
print('connected')
@socketio.on('disconnect')
def handle_my_disconnect_event():
print('disconnected')
# If running app.py, then run app itself.
if __name__ == '__main__':
socketio.run(app)
```
Models:
```# models.py
from flask_login import UserMixin
from app import db
class User(UserMixin, db.Model):
id = db.Column(db.Integer, primary_key = True)
email = db.Column(db.String(100), unique = True)
username = db.Column(db.String(50), unique = True)
password = db.Column(db.String(50))
firstName = db.Column(db.String(50))
lastName = db.Column(db.String(50))
chaseTheAceWins = db.Column(db.Integer)
class Player(db.Model):
id = db.Column(db.Integer, primary_key = True)
userId = db.Column(db.Integer)
roomId = db.Column(db.Integer)
generatedPlayerId = db.Column(db.String(100), unique = True)
name = db.Column(db.String(100))
card = db.Column(db.String(10))
lives = db.Column(db.Integer)
outOfGame = db.Column(db.Boolean)
class Room(db.Model):
id = db.Column(db.Integer, primary_key = True)
roomId = db.Column(db.Integer, unique = True)
gameType = db.Column(db.String(20))
hostPlayerId = db.Column(db.String(100))
currentPlayerId = db.Column(db.String(100))
dealerPlayerId = db.Column(db.String(100))
locked = db.Column(db.Boolean)
```
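For what it's worth, this is the kind of test I would like to be able to write (a sketch; the module name and assertions are assumptions), using Flask's and Flask-SocketIO's built-in test clients against the module-level objects:
```python
# Sketch: exercise the module-level app and socketio with their test clients.
import app as app_module

def test_socket_connects():
    flask_client = app_module.app.test_client()
    sio_client = app_module.socketio.test_client(
        app_module.app, flask_test_client=flask_client
    )
    assert sio_client.is_connected()
    sio_client.disconnect()
```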
Is there anything in my app that I'm doing wrong? | 1medium
|
Title: Browser opens but no further actions
Body: Just opens the browser and sits there with both Firefox and Chrome
Browsers - Firefox ESR & Chromium 68.0.3440.75
This is in the geckodriver.log
[Child 2489] ###!!! ABORT: Aborting on channel error.: file /build/firefox-esr-TVuMhV/firefox-esr-52.9.0esr/ipc/glue/MessageChannel.cpp, line 2152 | 1medium
|
Title: `torch.device.__enter__` does not affect `get_default_device` despite taking precedence over `set_default_device`
Body: ### ๐ Describe the bug
Using a `torch.device` as a context manager takes precedence over `set_default_device`, but this isn't reflected by the return value of `get_default_device`.
```python
import torch
import torch.utils._device
torch.set_default_device("cuda:1")
with torch.device("cuda:0"):
print(f"get_default_device(): {torch.get_default_device()}")
print(f"CURRENT_DEVICE: {torch.utils._device.CURRENT_DEVICE}")
print(f"actual current device: {torch.tensor(()).device}")
```
```
get_default_device(): cuda:1
CURRENT_DEVICE: cuda:1
actual current device: cuda:0
```
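For completeness, a user-level workaround sketch (my assumption, not a proposed fix): routing through `set_default_device` keeps `get_default_device` consistent, at the cost of not using the raw context manager:
```python
# Sketch: a context manager that funnels through set_default_device(), so
# torch.get_default_device() reports the device actually in effect.
import contextlib
import torch

@contextlib.contextmanager
def default_device(device):
    previous = torch.get_default_device()
    torch.set_default_device(device)
    try:
        yield
    finally:
        torch.set_default_device(previous)

with default_device("cuda:0"):
    print(torch.get_default_device())  # cuda:0
```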
I feel like calling `__enter__` on the `DeviceContext` created in `torch.device`'s C++ `__enter__` implementation and `__exit__` in the C++ `__exit__` implementation might be a solution.
https://github.com/pytorch/pytorch/blob/00199acdb85a4355612bff28e1018b035e0e46b9/torch/csrc/Device.cpp#L179-L197
https://github.com/pytorch/pytorch/blob/00199acdb85a4355612bff28e1018b035e0e46b9/torch/utils/_device.py#L100-L104
https://github.com/pytorch/pytorch/blob/00199acdb85a4355612bff28e1018b035e0e46b9/torch/__init__.py#L1134-L1147
cc: @ezyang
### Versions
torch==2.6.0
cc @albanD | 1medium
|
Title: [FR] Update Anthropic tracing to handle thinking blocks for claude-3.7-sonnet
Body: ### Willingness to contribute
Yes. I can contribute this feature independently.
### Proposal Summary
The current MLflow integration for Anthropic doesn't properly handle the new "thinking" feature in Claude Sonnet. When thinking is enabled, Claude returns content with specialized ThinkingBlock and TextBlock objects, but these aren't correctly processed in the message conversion function. As a result, the chat messages aren't properly captured during MLflow tracing, leading to incomplete traces (missing "chat" tab).
<img width="880" alt="Image" src="https://github.com/user-attachments/assets/a203b644-6f7d-403f-8023-365a145b7a50" />
I propose updating the implementation to check for the `thinking` block type and filter it out in the `convert_message_to_mlflow_chat` function, which restores the chat tab. The thinking contents are still captured in the inputs/outputs tab.
A more comprehensive handling of this issue would involve adding a `thinking` type to the chat section and would likely involve updates to multiple providers that now support thinking.
### Motivation
> #### What is the use case for this feature?
Working with the new `claude-3-7-sonnet-20250219` model with thinking enabled.
> #### Why is this use case valuable to support for MLflow users in general?
`claude-3-7-sonnet-20250219` is the latest and most capable claude model and will likely see substantial usage.
> #### Why is this use case valuable to support for your project(s) or organization?
this will address a limitation in the tracing handling of `claude-3-7-sonnet-20250219`
> #### Why is it currently difficult to achieve this use case?
(see above)
### Details
Proposed simple fix here:
https://github.com/mlflow/mlflow/blob/9cf17478518f632004ae062e87224fea0f704b45/mlflow/anthropic/chat.py#L50
```python
for content_block in content:
# Skip ThinkingBlock objects
if hasattr(content_block, "type") and getattr(content_block, "type") == "thinking":
continue
# Handle TextBlock objects directly
if hasattr(content_block, "type") and getattr(content_block, "type") == "text":
if hasattr(content_block, "text"):
contents.append(TextContentPart(text=getattr(content_block, "text"), type="text"))
continue
```
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [x] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations | 1medium
|
Title: Deprecation of the Python client
Body: Hello everyone, it's been long time coming but I'm officially stopping development of the Prisma Python Client.
This is for a couple of reasons:
- I originally built the client just for fun while I was a student, nowadays I don't have enough free time to properly maintain it.
- Prisma are rewriting their [core from Rust to TypeScript](https://www.prisma.io/blog/from-rust-to-typescript-a-new-chapter-for-prisma-orm). Unfortunately, adapting Prisma Client Python to this new architecture would require a ground up rewrite of our internals with significantly increased complexity as we would have to provide our own query interpreters and database drivers which is not something I'm interested in working on.
While it's certainly not impossible for community clients to exist in this new world, it is a *lot* more work. The [Go](https://github.com/steebchen/prisma-client-go/issues/1542), [Rust](https://github.com/Brendonovich/prisma-client-rust/discussions/476), and [Dart](https://github.com/medz/prisma-dart/issues/471) clients have similarly all been deprecated.
I greatly appreciate everyone who has supported the project over these last few years. | 3misc
|
Title: Getting duplicate logs with t2t_trainer,t2t_decoder,t2t_eval
Body: I am getting duplicate logs for each t2t command. How can I avoid that? For example, when I run the t2t_eval script, it evaluates on the eval dataset and then starts the evaluation again, logging the same output as before. | 1medium
|
Title: [DOC] We need release notes!
Body: This one is definitely on me. Starting with version 0.18.1, we should start collecting release notes in CHANGELOG.rst. | 0easy
|
Title: [bug] Opacity parameter not working in geemap.deck Layer API
Body: ### Environment Information
Tue Jan 28 16:21:45 2025 UTC
OS: Linux (Ubuntu 22.04); CPU(s): 2; Machine: x86_64
Architecture: 64bit; RAM: 12.7 GiB; Environment: IPython
Python 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0]
geemap: 0.35.1; ee: 1.4.6; ipyleaflet: 0.19.2
folium: 0.19.4; jupyterlab: Module not found; notebook: 6.5.5
ipyevents: 2.0.2; geopandas: 1.0.1
### Description
Trying to draw an EE layer with transparency (partial opacity) using the `geemap.deck` extension module.
The [geemap.deck.Layer.add_ee_layer](https://geemap.org/deck/#geemap.deck.Map.add_ee_layer) methodโs documentation includes an `opacity` keyword argument which should allow setting the layerโs opacity. This is often useful when there is a need for transparency to ensure a new layer doesnโt completely occlude other layers or the base map itself.
However, this argument is currently [ignored in the implementation](https://github.com/gee-community/geemap/blob/824e4e5/geemap/deck.py#L103-L187) which can cause confusion for the user.
### What I Did
As an _undocumented_ workaround, I set the `opacity` within the `vis_params` dictionary explicitly to get the opacity to work.
```python
import ee  # assumed import; the original snippet relies on ee being available
import pydeck as pdk  # assumed import for pdk.ViewState below
import geemap.deck as gmd
image_collection = ee.ImageCollection(...)
vis_params = {
"min": -40.0,
"max": 35.0,
"palette": ["blue", "purple", "cyan", "green", "yellow", "red"],
# set within vis parameters instead of the add_ee_layer_kwarg
"opacity": 0.2,
}
view_state = pdk.ViewState(...)  # view-state arguments elided in the original
m = gmd.Map(initial_view_state=view_state)
# NOTE: opacity kwarg is not recognized. rely on vis_params instead
m.add_ee_layer(image_collection, vis_params=vis_params, ...)
m.show()
```
~It would be a trivial fix to simply do this automatically within `add_ee_layer` to set the `opacity` within the `vis_params` dictionary if this kwarg is not `None`.~
| 1medium
|
Title: Review docs: Feedback
Body: | 0easy
|
Title: "Let's make a giant string!" code example is not representative
Body: `add_string_with_plus()` and `add_string_with_join()` take the same time in the example. It implies that CPython's `+=` optimization is in effect (unrelated to the example in the very next section with a possibly misleading title: ["String concatenation interpreter optimizations"](https://github.com/satwikkansal/wtfpython#string-concatenation-interpreter-optimizations) -- the example is more about string interning, string *literals* than string concatination -- the linked StackOverflow [answer](https://stackoverflow.com/a/24245514/4279) explains it quite well).
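For reference, the timed functions are roughly the following (a reconstruction from the linked section; the exact bodies may differ slightly):
```python
def add_string_with_plus(iters):
    s = ""
    for i in range(iters):
        s += "xyz"
    assert len(s) == 3 * iters

def add_string_with_format(iters):
    fs = "{}" * iters
    s = fs.format(*(["xyz"] * iters))
    assert len(s) == 3 * iters

def add_string_with_join(iters):
    l = []
    for i in range(iters):
        l.append("xyz")
    s = "".join(l)
    assert len(s) == 3 * iters

def convert_list_to_string(l, iters):
    s = "".join(l)
    assert len(s) == 3 * iters
```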
The explanation in ["Let's make a giant string!"](https://github.com/satwikkansal/wtfpython#lets-make-a-giant-string) claims *quadratic* behavior for `str + str + str + ...` in Python (correct) but the example `add_string_with_plus()` uses CPython `+= ` optimizations -- the actual times are *linear* on my machine (in theory the worst case is still O(n<sup>2</sup>) -- it depends on `realloc()` being O(n) in the worst case on the given platform -- unlike for Python lists x1.125 overallocation (`add_string_with_join()` is linear) is not used for str):
```
In [2]: %timeit add_string_with_plus(10000)
1000 loops, best of 3: 1.1 ms per loop
In [3]: %timeit add_string_with_format(10000)
1000 loops, best of 3: 539 ยตs per loop
In [4]: %timeit add_string_with_join(10000)
1000 loops, best of 3: 1.1 ms per loop
In [5]: L = ["xyz"]*10000
In [6]: %timeit convert_list_to_string(L, 10000)
10000 loops, best of 3: 118 ยตs per loop
In [7]: %timeit add_string_with_plus(100000)
100 loops, best of 3: 11.9 ms per loop
In [8]: %timeit add_string_with_join(100000)
100 loops, best of 3: 11.8 ms per loop
In [9]: %timeit add_string_with_plus(1000000)
10 loops, best of 3: 121 ms per loop
In [10]: %timeit add_string_with_join(1000000)
10 loops, best of 3: 116 ms per loop
```
Increasing `iters` x10, increases the time x10 -- *linear* behavior.
If you try the same code with `bytes` on Python 3; you get *quadratic* behavior (increasing x10 leads to x100 time) -- no optimization:
```
In [11]: def add_bytes_with_plus(n):
...: s = b""
...: for _ in range(n):
...: s += b"abc"
...: assert len(s) == 3*n
...:
In [12]: %timeit add_bytes_with_plus(10000)
100 loops, best of 3: 10.8 ms per loop
In [13]: %timeit add_bytes_with_plus(100000)
1 loop, best of 3: 1.26 s per loop
In [14]: %timeit add_bytes_with_plus(1000000)
1 loop, best of 3: 2min 37s per loop
```
[Here's a detailed explanation in Russian](https://ru.stackoverflow.com/a/710403/23044) (look at the timings, follow the links in the answer). | 1medium
|
Title: Errors with Zarr v3 and da.to_zarr()
Body: I'm having various issues and errors with `da.to_zarr()` using:
```
dask==2025.1.0
zarr==3.0.1
fsspec==2024.12.0
```
```
from skimage import data
import dask.array as da
import zarr
dask_data = da.from_array(data.coins(), chunks=(64, 64))
da.to_zarr(dask_data, "test_dask_to_zarr.zarr", compute=True, storage_options={"chunks": (64, 64)})
# Traceback (most recent call last):
# File "/Users/wmoore/Desktop/python-scripts/zarr_scripts/test_dask_to_zarr.py", line 7, in <module>
# da.to_zarr(dask_data, "test_dask_to_zarr.zarr", compute=True, storage_options={"chunks": (64, 64)})
# File "/Users/wmoore/opt/anaconda3/envs/zarrv3_py312/lib/python3.12/site-packages/dask/array/core.py", line 3891, in to_zarr
# store = zarr.storage.FsspecStore.from_url(
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# File "/Users/wmoore/opt/anaconda3/envs/zarrv3_py312/lib/python3.12/site-packages/zarr/storage/_fsspec.py", line 182, in from_url
# return cls(fs=fs, path=path, read_only=read_only, allowed_exceptions=allowed_exceptions)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# File "/Users/wmoore/opt/anaconda3/envs/zarrv3_py312/lib/python3.12/site-packages/zarr/storage/_fsspec.py", line 96, in __init__
# raise TypeError("Filesystem needs to support async operations.")
# TypeError: Filesystem needs to support async operations.
```
Trying to use a local store has a different error:
```
store = zarr.storage.LocalStore("test_dask_to_zarr.zarr", read_only=False)
da.to_zarr(dask_data, store, compute=True, storage_options={"chunks": (64, 64)})
# File "/Users/wmoore/Desktop/python-scripts/zarr_scripts/test_dask_to_zarr.py", line 46, in <module>
# da.to_zarr(dask_data, store, compute=True, storage_options={"chunks": (64, 64)})
# File "/Users/wmoore/opt/anaconda3/envs/zarrv3_py312/lib/python3.12/site-packages/dask/array/core.py", line 3891, in to_zarr
# store = zarr.storage.FsspecStore.from_url(
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# File "/Users/wmoore/opt/anaconda3/envs/zarrv3_py312/lib/python3.12/site-packages/zarr/storage/_fsspec.py", line 174, in from_url
# fs, path = url_to_fs(url, **opts)
# ^^^^^^^^^^^^^^^^^^^^^^
# File "/Users/wmoore/opt/anaconda3/envs/zarrv3_py312/lib/python3.12/site-packages/fsspec/core.py", line 403, in url_to_fs
# chain = _un_chain(url, kwargs)
# ^^^^^^^^^^^^^^^^^^^^^^
# File "/Users/wmoore/opt/anaconda3/envs/zarrv3_py312/lib/python3.12/site-packages/fsspec/core.py", line 335, in _un_chain
# if "::" in path:
# ^^^^^^^^^^^^
# TypeError: argument of type 'LocalStore' is not iterable
```
And also tried with FsspecStore:
```
store = zarr.storage.FsspecStore("test_dask_to_zarr.zarr", read_only=False)
da.to_zarr(dask_data, store, compute=True, storage_options={"chunks": (64, 64)})
# Traceback (most recent call last):
# File "/Users/wmoore/Desktop/python-scripts/zarr_scripts/test_dask_to_zarr.py", line 32, in <module>
# store = zarr.storage.FsspecStore("test_dask_to_zarr_v3.zarr", read_only=False)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# File "/Users/wmoore/opt/anaconda3/envs/zarrv3_py312/lib/python3.12/site-packages/zarr/storage/_fsspec.py", line 95, in __init__
# if not self.fs.async_impl:
# ^^^^^^^^^^^^^^^^^^
# AttributeError: 'str' object has no attribute 'async_impl'
```
Many thanks for your help | 1medium
|
Title: Cite SciPy family of packages and seaborn
Body: The final sentence of your paper states:
> The underlying packages involved (numpy, pandas, scipy, matplotlib, and seaborn) are familiar parts of the core scientific Python ecosystem, and hence very learnable and extensible. missingno works "out of the box" with a variety of data types and formats, and provides an extremely compact API.
The packages numpy, pandas, scipy, matplotlib, and seaborn should be cited. You can use this link to find the appropriate citation methods: https://scipy.org/citing.html (for all but seaborn). | 0easy
|
Title: add security policy
Body: | 3misc
|
Title: Release test microbenchmark.aws failed
Body: Release test **microbenchmark.aws** failed. See https://buildkite.com/ray-project/release/builds/34295#01954658-83ea-482b-b817-7731040b6ee1 for more details.
Managed by OSS Test Policy | 2hard
|
Title: datatable_experiments does not display
Body: Love the boilerplate mate! Keep up with the good work.
I am trying to implement one of the datatables (via `import dash_table_experiments`) but it does not seem to work. Take the code from this [example](https://github.com/plotly/dash-recipes/blob/master/dash-datatable-filter.py):
```python
# pages.py
import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_table_experiments as dt
import json
import pandas as pd
import plotly

from .components import Col, Row
page1 = html.Div([
dt.DataTable(
id='datatable',
rows=[
{'x': 1, 'y': 3},
{'x': 2, 'y': 10}
],
columns=['x'],
filterable=True,
filters={
"x": {
"column": {
"sortable": True,
"name": "x",
"filterable": True,
"editable": True,
"width": 673,
"rowType": "filter",
"key": "x",
"left": 673
},
"filterTerm": "2"
}
}
),
html.Div(id='content')
])
```
```python
# callbacks.py
from dash.dependencies import Input, Output  # app, html, and json come from the app's existing imports

@app.callback(Output('content', 'children'), [Input('datatable', 'filters')])
def display_filters(filters):
    return html.Pre(json.dumps(filters, indent=2))
```
When I run this, I do not seem to get any errors but it doesn't display the table as it should. Could you perhaps have a quick look? | 1medium
|
Title: Cannot Import U2NET
Body: I am trying to `from model import U2NET` but it's not working. The module "model" does not exist. How can I fix it? | 1medium
|
Title: Add a .register() option to accept self-signed certificates (no validation)
Body: | 1medium
|
Title: Content in `extra_navbar` is no longer shown after updating to 0.15.0
Body: ### Describe the bug
**context**
Content placed in `extra_navbar` under `html` in `_config.yml` is no longer shown after updating to 0.15.0.
**expectation**
I expected the content to be shown.
**bug**
No error message.
### Reproduce the bug
Update to 0.15.0 and build the book.
### List your environment
Jupyter Book : 0.15.0
External ToC : 0.3.1
MyST-Parser : 0.18.1
MyST-NB : 0.17.1
Sphinx Book Theme : 1.0.0
Jupyter-Cache : 0.5.0
NbClient : 0.5.13 | 1medium
|
Title: MultiLabelField not being indexed correctly with pre-trained transformer
Body: This is probably a user error but I cannot find a jsonl vocab constructor which works correctly with a MultiLabelField (i.e. a multi-label classifier).
I need to set the vocab's `unk` and `pad` tokens as I'm using a huggingface transformer, and of course I need to index the labels.
When I use `from_pretrained_transformer` to construct my vocabulary there are two issues. First, when `MultiLabelField.index` is called, the vocab only contains a tokens namespace, no labels. This causes `index` to crash; oddly, `vocab.get_token_index(label, self._label_namespace)` returns 1 (one) for every label despite the namespace not existing. Should it not return an error?
```
vocabulary: {
type: "from_pretrained_transformer",
model_name: "models/transformer",
}
```
Also inspecting the vocab object I'm seeing
- `_oov_token: '<unk>'`
- `_padding_token: '@@PADDING@@'`
So it has failed to infer the padding token. From what I can see, `from_pretrained_transformer` has no `padding_token` argument?
If I use 'from_instances' it indexes the labels correctly, but as far as I know it re-indexes the original vocab, leaving it out of alignment.
My config is:
```
vocabulary: {
type: "from_pretrained_transformer",
model_name: "models/transformer",
},
dataset_reader: {
type: "multi_label",
tokenizer: {
type: "pretrained_transformer",
model_name: "models/transformer"
},
token_indexers: {
tokens: {
type: "pretrained_transformer",
model_name: "models/transformer",
namespace: "tokens"
},
},
},
model: {
type: "multi_label",
text_field_embedder: {
token_embedders: {
tokens: {
type: "pretrained_transformer",
model_name: "models/transformer"
}
},
},
seq2vec_encoder: {
type: "bert_pooler",
pretrained_model: "models/transformer",
dropout: 0.1,
},
},
```
| 2hard
|
Title: Building `horovod-cpu` image failed with cmake errors
Body: **Environment:**
1. Framework: TensorFlow, PyTorch, MXNet
2. Framework version: 2.5.0, 1.8.1, 1.8.0.post0
3. Horovod version: v0.23.0
4. MPI version: 3.0.0
5. CUDA version: None
6. NCCL version: None
7. Python version: 3.7
8. Spark / PySpark version: 3.1.1
9. Ray version: None
10. OS and version: Ubuntu 18.04
11. GCC version: 7.5.0
12. CMake version: 3.10.2
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? Yes
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
**Bug report:**
I was trying to build a horovod-cpu image locally using this [provided Dockerfile](https://github.com/horovod/horovod/blob/v0.23.0/docker/horovod-cpu/Dockerfile) and with command
```
docker build -f docker/horovod-cpu/Dockerfile .
```
however the build failed with the following errors:
```
#22 48.71 running build_ext
#22 48.76 -- Could not find CCache. Consider installing CCache to speed up compilation.
#22 48.90 -- The CXX compiler identification is GNU 7.5.0
#22 48.90 -- Check for working CXX compiler: /usr/bin/c++
#22 49.00 -- Check for working CXX compiler: /usr/bin/c++ -- works
#22 49.00 -- Detecting CXX compiler ABI info
#22 49.10 -- Detecting CXX compiler ABI info - done
#22 49.11 -- Detecting CXX compile features
#22 49.56 -- Detecting CXX compile features - done
#22 49.58 -- Build architecture flags: -mf16c -mavx -mfma
#22 49.58 -- Using command /usr/bin/python
#22 49.97 -- Found MPI_CXX: /usr/local/lib/libmpi.so (found version "3.1")
#22 49.97 -- Found MPI: TRUE (found version "3.1")
#22 49.97 -- Could NOT find NVTX (missing: NVTX_INCLUDE_DIR)
#22 49.97 CMake Error at CMakeLists.txt:265 (add_subdirectory):
#22 49.97 add_subdirectory given source "third_party/gloo" which is not an existing
#22 49.98 directory.
#22 49.98
#22 49.98
#22 49.98 CMake Error at CMakeLists.txt:267 (target_compile_definitions):
#22 49.98 Cannot specify compile definitions for target "gloo" which is not built by
#22 49.98 this project.
#22 49.98
#22 49.98
#22 52.34 Tensorflow_LIBRARIES := -L/usr/local/lib/python3.7/dist-packages/tensorflow -l:libtensorflow_framework.so.2
#22 52.35 -- Found Tensorflow: -L/usr/local/lib/python3.7/dist-packages/tensorflow -l:libtensorflow_framework.so.2 (found suitable version "2.5.0", minimum required is "1.15.0")
#22 53.16 -- Found Pytorch: 1.8.1+cu102 (found suitable version "1.8.1+cu102", minimum required is "1.2.0")
#22 59.99 -- Found Mxnet: /usr/local/lib/python3.7/dist-packages/mxnet/libmxnet.so (found suitable version "1.8.0", minimum required is "1.4.0")
#22 61.13 CMake Error at CMakeLists.txt:327 (file):
#22 61.13 file COPY cannot find "/tmp/pip-req-build-s0z_ufky/third_party/gloo".
#22 61.13
#22 61.13
#22 61.13 CMake Error at CMakeLists.txt:328 (file):
#22 61.13 file failed to open for reading (No such file or directory):
#22 61.13
#22 61.13 /tmp/pip-req-build-s0z_ufky/third_party/compatible_gloo/gloo/CMakeLists.txt
#22 61.13
#22 61.13
#22 61.13 CMake Error at CMakeLists.txt:331 (add_subdirectory):
#22 61.13 The source directory
#22 61.13
#22 61.13 /tmp/pip-req-build-s0z_ufky/third_party/compatible_gloo
#22 61.13
#22 61.13 does not contain a CMakeLists.txt file.
#22 61.13
#22 61.13
#22 61.13 CMake Error at CMakeLists.txt:332 (target_compile_definitions):
#22 61.13 Cannot specify compile definitions for target "compatible_gloo" which is
#22 61.13 not built by this project.
#22 61.13
#22 61.13
#22 61.13 CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
#22 61.13 Please set them or make sure they are set and tested correctly in the CMake files:
#22 61.13 /tmp/pip-req-build-s0z_ufky/horovod/mxnet/TF_FLATBUFFERS_INCLUDE_PATH
#22 61.13 used as include directory in directory /tmp/pip-req-build-s0z_ufky/horovod/mxnet
#22 61.13 /tmp/pip-req-build-s0z_ufky/horovod/tensorflow/TF_FLATBUFFERS_INCLUDE_PATH
#22 61.13 used as include directory in directory /tmp/pip-req-build-s0z_ufky/horovod/tensorflow
#22 61.13 /tmp/pip-req-build-s0z_ufky/horovod/torch/TF_FLATBUFFERS_INCLUDE_PATH
#22 61.13 used as include directory in directory /tmp/pip-req-build-s0z_ufky/horovod/torch
#22 61.13
#22 61.13 -- Configuring incomplete, errors occurred!
#22 61.13 See also "/tmp/pip-req-build-s0z_ufky/build/temp.linux-x86_64-3.7/RelWithDebInfo/CMakeFiles/CMakeOutput.log".
#22 61.14 Traceback (most recent call last):
#22 61.14 File "<string>", line 1, in <module>
#22 61.14 File "/tmp/pip-req-build-s0z_ufky/setup.py", line 211, in <module>
#22 61.14 'horovodrun = horovod.runner.launch:run_commandline'
#22 61.14 File "/usr/local/lib/python3.7/dist-packages/setuptools/__init__.py", line 153, in setup
#22 61.14 return distutils.core.setup(**attrs)
#22 61.14 File "/usr/lib/python3.7/distutils/core.py", line 148, in setup
#22 61.14 dist.run_commands()
#22 61.14 File "/usr/lib/python3.7/distutils/dist.py", line 966, in run_commands
#22 61.14 self.run_command(cmd)
#22 61.14 File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
#22 61.14 cmd_obj.run()
#22 61.14 File "/usr/local/lib/python3.7/dist-packages/wheel/bdist_wheel.py", line 299, in run
#22 61.14 self.run_command('build')
#22 61.14 File "/usr/lib/python3.7/distutils/cmd.py", line 313, in run_command
#22 61.14 self.distribution.run_command(command)
#22 61.14 File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
#22 61.14 cmd_obj.run()
#22 61.14 File "/usr/lib/python3.7/distutils/command/build.py", line 135, in run
#22 61.14 self.run_command(cmd_name)
#22 61.14 File "/usr/lib/python3.7/distutils/cmd.py", line 313, in run_command
#22 61.14 self.distribution.run_command(command)
#22 61.15 File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
#22 61.15 cmd_obj.run()
#22 61.15 File "/usr/local/lib/python3.7/dist-packages/setuptools/command/build_ext.py", line 79, in run
#22 61.15 _build_ext.run(self)
#22 61.15 File "/usr/lib/python3.7/distutils/command/build_ext.py", line 340, in run
#22 61.15 self.build_extensions()
#22 61.15 File "/tmp/pip-req-build-s0z_ufky/setup.py", line 99, in build_extensions
#22 61.15 cwd=cmake_build_dir)
#22 61.15 File "/usr/lib/python3.7/subprocess.py", line 363, in check_call
#22 61.15 raise CalledProcessError(retcode, cmd)
#22 61.15 subprocess.CalledProcessError: Command '['cmake', '/tmp/pip-req-build-s0z_ufky', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/tmp/pip-req-build-s0z_ufky/build/lib.linux-x86_64-3.7', '-DPYTHON_EXECUTABLE:FILEPATH=/usr/bin/python']' returned non-zero exit status 1.
#22 61.17 Building wheel for horovod (setup.py): finished with status 'error'
#22 61.17 ERROR: Failed building wheel for horovod
#22 61.17 Running setup.py clean for horovod
#22 61.17 Running command /usr/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-s0z_ufky/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-s0z_ufky/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' clean --all
#22 61.43 running clean
#22 61.43 removing 'build/temp.linux-x86_64-3.7' (and everything under it)
#22 61.44 removing 'build/lib.linux-x86_64-3.7' (and everything under it)
#22 61.44 'build/bdist.linux-x86_64' does not exist -- can't clean it
#22 61.44 'build/scripts-3.7' does not exist -- can't clean it
#22 61.44 removing 'build'
#22 61.46 Failed to build horovod
#22 62.05 Installing collected packages: pytz, python-dateutil, pyrsistent, pycparser, importlib-resources, deprecated, redis, pyzmq, pyarrow, psutil, pandas, msgpack, jsonschema, hiredis, filelock, diskcache, dill, cloudpickle, click, cffi, ray, petastorm, horovod, h5py, aioredis
#22 63.76 changing mode of /usr/local/bin/plasma_store to 755
#22 67.56 changing mode of /usr/local/bin/jsonschema to 755
#22 70.60 changing mode of /usr/local/bin/ray to 755
#22 70.60 changing mode of /usr/local/bin/ray-operator to 755
#22 70.60 changing mode of /usr/local/bin/rllib to 755
#22 70.60 changing mode of /usr/local/bin/serve to 755
#22 70.60 changing mode of /usr/local/bin/tune to 755
#22 70.79 changing mode of /usr/local/bin/petastorm-copy-dataset.py to 755
#22 70.79 changing mode of /usr/local/bin/petastorm-generate-metadata.py to 755
#22 70.79 changing mode of /usr/local/bin/petastorm-throughput.py to 755
#22 70.80 Running setup.py install for horovod: started
#22 70.80 Running command /usr/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-s0z_ufky/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-s0z_ufky/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-2urw0_at/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.7/horovod
#22 71.08 running install
#22 71.08 /usr/local/lib/python3.7/dist-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
#22 71.08 setuptools.SetuptoolsDeprecationWarning,
#22 71.08 running build
#22 71.08 running build_py
#22 71.08 creating build
#22 71.08 creating build/lib.linux-x86_64-3.7
#22 71.08 creating build/lib.linux-x86_64-3.7/horovod
#22 71.08 copying horovod/__init__.py -> build/lib.linux-x86_64-3.7/horovod
#22 71.08 creating build/lib.linux-x86_64-3.7/horovod/spark
#22 71.08 copying horovod/spark/runner.py -> build/lib.linux-x86_64-3.7/horovod/spark
#22 71.08 copying horovod/spark/gloo_run.py -> build/lib.linux-x86_64-3.7/horovod/spark
#22 71.08 copying horovod/spark/conf.py -> build/lib.linux-x86_64-3.7/horovod/spark
#22 71.08 copying horovod/spark/mpi_run.py -> build/lib.linux-x86_64-3.7/horovod/spark
#22 71.09 copying horovod/spark/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark
#22 71.09 creating build/lib.linux-x86_64-3.7/horovod/keras
#22 71.09 copying horovod/keras/elastic.py -> build/lib.linux-x86_64-3.7/horovod/keras
#22 71.09 copying horovod/keras/callbacks.py -> build/lib.linux-x86_64-3.7/horovod/keras
#22 71.09 copying horovod/keras/__init__.py -> build/lib.linux-x86_64-3.7/horovod/keras
#22 71.09 creating build/lib.linux-x86_64-3.7/horovod/tensorflow
#22 71.09 copying horovod/tensorflow/elastic.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
#22 71.09 copying horovod/tensorflow/mpi_ops.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
#22 71.09 copying horovod/tensorflow/sync_batch_norm.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
#22 71.09 copying horovod/tensorflow/functions.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
#22 71.09 copying horovod/tensorflow/gradient_aggregation.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
#22 71.09 copying horovod/tensorflow/__init__.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
#22 71.09 copying horovod/tensorflow/util.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
#22 71.09 copying horovod/tensorflow/compression.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
#22 71.09 copying horovod/tensorflow/gradient_aggregation_eager.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow
#22 71.09 creating build/lib.linux-x86_64-3.7/horovod/data
#22 71.09 copying horovod/data/__init__.py -> build/lib.linux-x86_64-3.7/horovod/data
#22 71.09 copying horovod/data/data_loader_base.py -> build/lib.linux-x86_64-3.7/horovod/data
#22 71.10 creating build/lib.linux-x86_64-3.7/horovod/_keras
#22 71.10 copying horovod/_keras/elastic.py -> build/lib.linux-x86_64-3.7/horovod/_keras
#22 71.10 copying horovod/_keras/callbacks.py -> build/lib.linux-x86_64-3.7/horovod/_keras
#22 71.10 copying horovod/_keras/__init__.py -> build/lib.linux-x86_64-3.7/horovod/_keras
#22 71.10 creating build/lib.linux-x86_64-3.7/horovod/common
#22 71.10 copying horovod/common/elastic.py -> build/lib.linux-x86_64-3.7/horovod/common
#22 71.10 copying horovod/common/basics.py -> build/lib.linux-x86_64-3.7/horovod/common
#22 71.10 copying horovod/common/process_sets.py -> build/lib.linux-x86_64-3.7/horovod/common
#22 71.10 copying horovod/common/__init__.py -> build/lib.linux-x86_64-3.7/horovod/common
#22 71.10 copying horovod/common/exceptions.py -> build/lib.linux-x86_64-3.7/horovod/common
#22 71.10 copying horovod/common/util.py -> build/lib.linux-x86_64-3.7/horovod/common
#22 71.10 creating build/lib.linux-x86_64-3.7/horovod/mxnet
#22 71.10 copying horovod/mxnet/mpi_ops.py -> build/lib.linux-x86_64-3.7/horovod/mxnet
#22 71.10 copying horovod/mxnet/functions.py -> build/lib.linux-x86_64-3.7/horovod/mxnet
#22 71.10 copying horovod/mxnet/__init__.py -> build/lib.linux-x86_64-3.7/horovod/mxnet
#22 71.10 copying horovod/mxnet/compression.py -> build/lib.linux-x86_64-3.7/horovod/mxnet
#22 71.10 creating build/lib.linux-x86_64-3.7/horovod/runner
#22 71.10 copying horovod/runner/task_fn.py -> build/lib.linux-x86_64-3.7/horovod/runner
#22 71.10 copying horovod/runner/launch.py -> build/lib.linux-x86_64-3.7/horovod/runner
#22 71.10 copying horovod/runner/run_task.py -> build/lib.linux-x86_64-3.7/horovod/runner
#22 71.10 copying horovod/runner/gloo_run.py -> build/lib.linux-x86_64-3.7/horovod/runner
#22 71.10 copying horovod/runner/js_run.py -> build/lib.linux-x86_64-3.7/horovod/runner
#22 71.10 copying horovod/runner/mpi_run.py -> build/lib.linux-x86_64-3.7/horovod/runner
#22 71.10 copying horovod/runner/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner
#22 71.10 creating build/lib.linux-x86_64-3.7/horovod/torch
#22 71.10 copying horovod/torch/optimizer.py -> build/lib.linux-x86_64-3.7/horovod/torch
#22 71.10 copying horovod/torch/mpi_ops.py -> build/lib.linux-x86_64-3.7/horovod/torch
#22 71.10 copying horovod/torch/sync_batch_norm.py -> build/lib.linux-x86_64-3.7/horovod/torch
#22 71.10 copying horovod/torch/functions.py -> build/lib.linux-x86_64-3.7/horovod/torch
#22 71.11 copying horovod/torch/__init__.py -> build/lib.linux-x86_64-3.7/horovod/torch
#22 71.11 copying horovod/torch/compression.py -> build/lib.linux-x86_64-3.7/horovod/torch
#22 71.11 creating build/lib.linux-x86_64-3.7/horovod/ray
#22 71.11 copying horovod/ray/runner.py -> build/lib.linux-x86_64-3.7/horovod/ray
#22 71.11 copying horovod/ray/elastic.py -> build/lib.linux-x86_64-3.7/horovod/ray
#22 71.11 copying horovod/ray/worker.py -> build/lib.linux-x86_64-3.7/horovod/ray
#22 71.11 copying horovod/ray/strategy.py -> build/lib.linux-x86_64-3.7/horovod/ray
#22 71.11 copying horovod/ray/driver_service.py -> build/lib.linux-x86_64-3.7/horovod/ray
#22 71.11 copying horovod/ray/ray_logger.py -> build/lib.linux-x86_64-3.7/horovod/ray
#22 71.11 copying horovod/ray/__init__.py -> build/lib.linux-x86_64-3.7/horovod/ray
#22 71.11 copying horovod/ray/utils.py -> build/lib.linux-x86_64-3.7/horovod/ray
#22 71.11 creating build/lib.linux-x86_64-3.7/horovod/spark/keras
#22 71.11 copying horovod/spark/keras/optimizer.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
#22 71.11 copying horovod/spark/keras/tensorflow.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
#22 71.11 copying horovod/spark/keras/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
#22 71.11 copying horovod/spark/keras/bare.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
#22 71.11 copying horovod/spark/keras/remote.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
#22 71.11 copying horovod/spark/keras/util.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
#22 71.11 copying horovod/spark/keras/estimator.py -> build/lib.linux-x86_64-3.7/horovod/spark/keras
#22 71.11 creating build/lib.linux-x86_64-3.7/horovod/spark/lightning
#22 71.11 copying horovod/spark/lightning/datamodule.py -> build/lib.linux-x86_64-3.7/horovod/spark/lightning
#22 71.11 copying horovod/spark/lightning/legacy.py -> build/lib.linux-x86_64-3.7/horovod/spark/lightning
#22 71.11 copying horovod/spark/lightning/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/lightning
#22 71.11 copying horovod/spark/lightning/remote.py -> build/lib.linux-x86_64-3.7/horovod/spark/lightning
#22 71.11 copying horovod/spark/lightning/util.py -> build/lib.linux-x86_64-3.7/horovod/spark/lightning
#22 71.11 copying horovod/spark/lightning/estimator.py -> build/lib.linux-x86_64-3.7/horovod/spark/lightning
#22 71.11 creating build/lib.linux-x86_64-3.7/horovod/spark/data_loaders
#22 71.11 copying horovod/spark/data_loaders/pytorch_data_loaders.py -> build/lib.linux-x86_64-3.7/horovod/spark/data_loaders
#22 71.11 copying horovod/spark/data_loaders/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/data_loaders
#22 71.12 creating build/lib.linux-x86_64-3.7/horovod/spark/task
#22 71.12 copying horovod/spark/task/gloo_exec_fn.py -> build/lib.linux-x86_64-3.7/horovod/spark/task
#22 71.12 copying horovod/spark/task/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/task
#22 71.12 copying horovod/spark/task/task_info.py -> build/lib.linux-x86_64-3.7/horovod/spark/task
#22 71.12 copying horovod/spark/task/task_service.py -> build/lib.linux-x86_64-3.7/horovod/spark/task
#22 71.12 copying horovod/spark/task/mpirun_exec_fn.py -> build/lib.linux-x86_64-3.7/horovod/spark/task
#22 71.12 creating build/lib.linux-x86_64-3.7/horovod/spark/driver
#22 71.12 copying horovod/spark/driver/driver_service.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
#22 71.12 copying horovod/spark/driver/host_discovery.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
#22 71.12 copying horovod/spark/driver/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
#22 71.12 copying horovod/spark/driver/rendezvous.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
#22 71.12 copying horovod/spark/driver/job_id.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
#22 71.12 copying horovod/spark/driver/mpirun_rsh.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
#22 71.12 copying horovod/spark/driver/rsh.py -> build/lib.linux-x86_64-3.7/horovod/spark/driver
#22 71.12 creating build/lib.linux-x86_64-3.7/horovod/spark/common
#22 71.12 copying horovod/spark/common/serialization.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
#22 71.12 copying horovod/spark/common/cache.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
#22 71.12 copying horovod/spark/common/backend.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
#22 71.12 copying horovod/spark/common/_namedtuple_fix.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
#22 71.12 copying horovod/spark/common/constants.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
#22 71.12 copying horovod/spark/common/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
#22 71.12 copying horovod/spark/common/util.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
#22 71.12 copying horovod/spark/common/params.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
#22 71.13 copying horovod/spark/common/store.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
#22 71.13 copying horovod/spark/common/estimator.py -> build/lib.linux-x86_64-3.7/horovod/spark/common
#22 71.13 creating build/lib.linux-x86_64-3.7/horovod/spark/torch
#22 71.13 copying horovod/spark/torch/__init__.py -> build/lib.linux-x86_64-3.7/horovod/spark/torch
#22 71.13 copying horovod/spark/torch/remote.py -> build/lib.linux-x86_64-3.7/horovod/spark/torch
#22 71.13 copying horovod/spark/torch/util.py -> build/lib.linux-x86_64-3.7/horovod/spark/torch
#22 71.13 copying horovod/spark/torch/estimator.py -> build/lib.linux-x86_64-3.7/horovod/spark/torch
#22 71.13 creating build/lib.linux-x86_64-3.7/horovod/tensorflow/keras
#22 71.13 copying horovod/tensorflow/keras/elastic.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow/keras
#22 71.13 copying horovod/tensorflow/keras/callbacks.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow/keras
#22 71.13 copying horovod/tensorflow/keras/__init__.py -> build/lib.linux-x86_64-3.7/horovod/tensorflow/keras
#22 71.13 creating build/lib.linux-x86_64-3.7/horovod/runner/util
#22 71.13 copying horovod/runner/util/cache.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
#22 71.13 copying horovod/runner/util/threads.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
#22 71.13 copying horovod/runner/util/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
#22 71.13 copying horovod/runner/util/lsf.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
#22 71.13 copying horovod/runner/util/remote.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
#22 71.13 copying horovod/runner/util/streams.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
#22 71.13 copying horovod/runner/util/network.py -> build/lib.linux-x86_64-3.7/horovod/runner/util
#22 71.13 creating build/lib.linux-x86_64-3.7/horovod/runner/http
#22 71.13 copying horovod/runner/http/http_server.py -> build/lib.linux-x86_64-3.7/horovod/runner/http
#22 71.13 copying horovod/runner/http/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/http
#22 71.13 copying horovod/runner/http/http_client.py -> build/lib.linux-x86_64-3.7/horovod/runner/http
#22 71.13 creating build/lib.linux-x86_64-3.7/horovod/runner/task
#22 71.13 copying horovod/runner/task/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/task
#22 71.13 copying horovod/runner/task/task_service.py -> build/lib.linux-x86_64-3.7/horovod/runner/task
#22 71.13 creating build/lib.linux-x86_64-3.7/horovod/runner/driver
#22 71.13 copying horovod/runner/driver/driver_service.py -> build/lib.linux-x86_64-3.7/horovod/runner/driver
#22 71.13 copying horovod/runner/driver/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/driver
#22 71.13 creating build/lib.linux-x86_64-3.7/horovod/runner/common
#22 71.13 copying horovod/runner/common/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/common
#22 71.13 creating build/lib.linux-x86_64-3.7/horovod/runner/elastic
#22 71.14 copying horovod/runner/elastic/worker.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
#22 71.14 copying horovod/runner/elastic/driver.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
#22 71.14 copying horovod/runner/elastic/registration.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
#22 71.14 copying horovod/runner/elastic/constants.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
#22 71.14 copying horovod/runner/elastic/settings.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
#22 71.14 copying horovod/runner/elastic/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
#22 71.14 copying horovod/runner/elastic/rendezvous.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
#22 71.14 copying horovod/runner/elastic/discovery.py -> build/lib.linux-x86_64-3.7/horovod/runner/elastic
#22 71.14 creating build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 copying horovod/runner/common/util/secret.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 copying horovod/runner/common/util/host_hash.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 copying horovod/runner/common/util/settings.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 copying horovod/runner/common/util/tiny_shell_exec.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 copying horovod/runner/common/util/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 copying horovod/runner/common/util/config_parser.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 copying horovod/runner/common/util/env.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 copying horovod/runner/common/util/hosts.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 copying horovod/runner/common/util/timeout.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 copying horovod/runner/common/util/network.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 copying horovod/runner/common/util/safe_shell_exec.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 copying horovod/runner/common/util/codec.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/util
#22 71.14 creating build/lib.linux-x86_64-3.7/horovod/runner/common/service
#22 71.14 copying horovod/runner/common/service/driver_service.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/service
#22 71.14 copying horovod/runner/common/service/__init__.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/service
#22 71.14 copying horovod/runner/common/service/task_service.py -> build/lib.linux-x86_64-3.7/horovod/runner/common/service
#22 71.14 creating build/lib.linux-x86_64-3.7/horovod/torch/mpi_lib
#22 71.14 copying horovod/torch/mpi_lib/__init__.py -> build/lib.linux-x86_64-3.7/horovod/torch/mpi_lib
#22 71.14 creating build/lib.linux-x86_64-3.7/horovod/torch/mpi_lib_impl
#22 71.14 copying horovod/torch/mpi_lib_impl/__init__.py -> build/lib.linux-x86_64-3.7/horovod/torch/mpi_lib_impl
#22 71.14 creating build/lib.linux-x86_64-3.7/horovod/torch/elastic
#22 71.14 copying horovod/torch/elastic/state.py -> build/lib.linux-x86_64-3.7/horovod/torch/elastic
#22 71.14 copying horovod/torch/elastic/__init__.py -> build/lib.linux-x86_64-3.7/horovod/torch/elastic
#22 71.15 copying horovod/torch/elastic/sampler.py -> build/lib.linux-x86_64-3.7/horovod/torch/elastic
#22 71.15 running build_ext
#22 71.16 -- Could not find CCache. Consider installing CCache to speed up compilation.
#22 71.23 -- The CXX compiler identification is GNU 7.5.0
#22 71.24 -- Check for working CXX compiler: /usr/bin/c++
#22 71.34 -- Check for working CXX compiler: /usr/bin/c++ -- works
#22 71.34 -- Detecting CXX compiler ABI info
#22 71.43 -- Detecting CXX compiler ABI info - done
#22 71.45 -- Detecting CXX compile features
#22 71.91 -- Detecting CXX compile features - done
#22 71.92 -- Build architecture flags: -mf16c -mavx -mfma
#22 71.92 -- Using command /usr/bin/python
#22 72.31 -- Found MPI_CXX: /usr/local/lib/libmpi.so (found version "3.1")
#22 72.31 -- Found MPI: TRUE (found version "3.1")
#22 72.31 -- Could NOT find NVTX (missing: NVTX_INCLUDE_DIR)
#22 72.31 CMake Error at CMakeLists.txt:265 (add_subdirectory):
#22 72.31 add_subdirectory given source "third_party/gloo" which is not an existing
#22 72.31 directory.
#22 72.31
#22 72.31
#22 72.31 CMake Error at CMakeLists.txt:267 (target_compile_definitions):
#22 72.31 Cannot specify compile definitions for target "gloo" which is not built by
#22 72.32 this project.
#22 72.32
#22 72.32
#22 73.91 Tensorflow_LIBRARIES := -L/usr/local/lib/python3.7/dist-packages/tensorflow -l:libtensorflow_framework.so.2
#22 73.91 -- Found Tensorflow: -L/usr/local/lib/python3.7/dist-packages/tensorflow -l:libtensorflow_framework.so.2 (found suitable version "2.5.0", minimum required is "1.15.0")
#22 74.42 -- Found Pytorch: 1.8.1+cu102 (found suitable version "1.8.1+cu102", minimum required is "1.2.0")
#22 81.17 -- Found Mxnet: /usr/local/lib/python3.7/dist-packages/mxnet/libmxnet.so (found suitable version "1.8.0", minimum required is "1.4.0")
#22 82.47 CMake Error at CMakeLists.txt:327 (file):
#22 82.47 file COPY cannot find "/tmp/pip-req-build-s0z_ufky/third_party/gloo".
#22 82.47
#22 82.47
#22 82.47 CMake Error at CMakeLists.txt:331 (add_subdirectory):
#22 82.47 The source directory
#22 82.47
#22 82.47 /tmp/pip-req-build-s0z_ufky/third_party/compatible_gloo
#22 82.47
#22 82.47 does not contain a CMakeLists.txt file.
#22 82.47
#22 82.47
#22 82.47 CMake Error at CMakeLists.txt:332 (target_compile_definitions):
#22 82.47 Cannot specify compile definitions for target "compatible_gloo" which is
#22 82.47 not built by this project.
#22 82.47
#22 82.47
#22 82.47 CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
#22 82.47 Please set them or make sure they are set and tested correctly in the CMake files:
#22 82.47 /tmp/pip-req-build-s0z_ufky/horovod/mxnet/TF_FLATBUFFERS_INCLUDE_PATH
#22 82.47 used as include directory in directory /tmp/pip-req-build-s0z_ufky/horovod/mxnet
#22 82.47 /tmp/pip-req-build-s0z_ufky/horovod/tensorflow/TF_FLATBUFFERS_INCLUDE_PATH
#22 82.47 used as include directory in directory /tmp/pip-req-build-s0z_ufky/horovod/tensorflow
#22 82.47 /tmp/pip-req-build-s0z_ufky/horovod/torch/TF_FLATBUFFERS_INCLUDE_PATH
#22 82.47 used as include directory in directory /tmp/pip-req-build-s0z_ufky/horovod/torch
#22 82.47
#22 82.47 -- Configuring incomplete, errors occurred!
#22 82.48 See also "/tmp/pip-req-build-s0z_ufky/build/temp.linux-x86_64-3.7/RelWithDebInfo/CMakeFiles/CMakeOutput.log".
#22 82.48 Traceback (most recent call last):
#22 82.48 File "<string>", line 1, in <module>
#22 82.48 File "/tmp/pip-req-build-s0z_ufky/setup.py", line 211, in <module>
#22 82.48 'horovodrun = horovod.runner.launch:run_commandline'
#22 82.48 File "/usr/local/lib/python3.7/dist-packages/setuptools/__init__.py", line 153, in setup
#22 82.48 return distutils.core.setup(**attrs)
#22 82.48 File "/usr/lib/python3.7/distutils/core.py", line 148, in setup
#22 82.48 dist.run_commands()
#22 82.48 File "/usr/lib/python3.7/distutils/dist.py", line 966, in run_commands
#22 82.48 self.run_command(cmd)
#22 82.48 File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
#22 82.49 cmd_obj.run()
#22 82.49 File "/usr/local/lib/python3.7/dist-packages/setuptools/command/install.py", line 68, in run
#22 82.49 return orig.install.run(self)
#22 82.49 File "/usr/lib/python3.7/distutils/command/install.py", line 589, in run
#22 82.49 self.run_command('build')
#22 82.49 File "/usr/lib/python3.7/distutils/cmd.py", line 313, in run_command
#22 82.49 self.distribution.run_command(command)
#22 82.49 File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
#22 82.49 cmd_obj.run()
#22 82.49 File "/usr/lib/python3.7/distutils/command/build.py", line 135, in run
#22 82.49 self.run_command(cmd_name)
#22 82.49 File "/usr/lib/python3.7/distutils/cmd.py", line 313, in run_command
#22 82.49 self.distribution.run_command(command)
#22 82.49 File "/usr/lib/python3.7/distutils/dist.py", line 985, in run_command
#22 82.49 cmd_obj.run()
#22 82.49 File "/usr/local/lib/python3.7/dist-packages/setuptools/command/build_ext.py", line 79, in run
#22 82.49 _build_ext.run(self)
#22 82.49 File "/usr/lib/python3.7/distutils/command/build_ext.py", line 340, in run
#22 82.49 self.build_extensions()
#22 82.49 File "/tmp/pip-req-build-s0z_ufky/setup.py", line 99, in build_extensions
#22 82.50 cwd=cmake_build_dir)
#22 82.50 File "/usr/lib/python3.7/subprocess.py", line 363, in check_call
#22 82.50 raise CalledProcessError(retcode, cmd)
#22 82.50 subprocess.CalledProcessError: Command '['cmake', '/tmp/pip-req-build-s0z_ufky', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/tmp/pip-req-build-s0z_ufky/build/lib.linux-x86_64-3.7', '-DPYTHON_EXECUTABLE:FILEPATH=/usr/bin/python']' returned non-zero exit status 1.
#22 82.52 Running setup.py install for horovod: finished with status 'error'
#22 82.52 ERROR: Command errored out with exit status 1: /usr/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-s0z_ufky/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-s0z_ufky/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-2urw0_at/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.7/horovod Check the logs for full command output.
------
executor failed running [/bin/bash -cu python setup.py sdist && bash -c "HOROVOD_WITH_TENSORFLOW=1 HOROVOD_WITH_PYTORCH=1 HOROVOD_WITH_MXNET=1 pip install --no-cache-dir -v $(ls /horovod/dist/horovod-*.tar.gz)[spark,ray]" && horovodrun --check-build]: exit code: 1
```
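One more detail that may matter: all of the CMake errors point at `third_party/gloo` missing from the sdist, which makes me suspect the checkout used as the docker build context does not have its git submodules. If that is the cause, something like this before building might help (untested guess on my side):
```bash
# Untested guess: make sure the submodules exist in the build context first
git submodule update --init --recursive
docker build -f docker/horovod-cpu/Dockerfile .
```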
Thank you. | 2hard
|
Title: [Question] Getting danmaku now raises KeyError: 'total'
Body: **Python version:** 3.12
**Module version:** x.y.z
**Runtime environment:** Linux
Is this project still being maintained?
---------------------------------------------------------------------------
```python
# This is my code
# Get the ranking, periodically query the number of online viewers, and collect danmaku
import asyncio
from bilibili_api import video, sync  # `sync` added here; it was used below but not imported

# Instantiate
v = video.Video(bvid="BV15EtgeUEaD")

# Get the number of online viewers
print(sync(v.get_online()))

print(sync(v.get_danmakus()))  # this call raises the error
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[43], line 10
7 # ่ทๅๅจ็บฟไบบๆฐ
8 print(sync(v.get_online()))
---> 10 print(sync(v.get_danmakus()))
File ~/anaconda3/lib/python3.12/site-packages/bilibili_api/utils/sync.py:33, in sync(coroutine)
31 __ensure_event_loop()
32 loop = asyncio.get_event_loop()
---> 33 return loop.run_until_complete(coroutine)
File ~/anaconda3/lib/python3.12/site-packages/nest_asyncio.py:98, in _patch_loop.<locals>.run_until_complete(self, future)
95 if not f.done():
96 raise RuntimeError(
97 'Event loop stopped before Future completed.')
---> 98 return f.result()
File ~/anaconda3/lib/python3.12/asyncio/futures.py:203, in Future.result(self)
201 self.__log_traceback = False
202 if self._exception is not None:
--> 203 raise self._exception.with_traceback(self._exception_tb)
204 return self._result
File ~/anaconda3/lib/python3.12/asyncio/tasks.py:314, in Task.__step_run_and_handle_result(***failed resolving arguments***)
310 try:
311 if exc is None:
312 # We use the `send` method directly, because coroutines
313 # don't have `__iter__` and `__next__` methods.
--> 314 result = coro.send(None)
315 else:
316 result = coro.throw(exc)
File ~/anaconda3/lib/python3.12/site-packages/bilibili_api/video.py:883, in Video.get_danmakus(self, page_index, date, cid, from_seg, to_seg)
881 if to_seg == None:
882 view = await self.get_danmaku_view(cid=cid)
--> 883 to_seg = view["dm_seg"]["total"] - 1
885 danmakus = []
887 for seg in range(from_seg, to_seg + 1):
KeyError: 'total'
```
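From the traceback, the failure seems to happen only on the code path where `to_seg` is `None` and the library reads `view["dm_seg"]["total"]`. As an untested guess based on the signature shown above, passing the segment range explicitly might avoid it:
```python
# Untested guess based on the signature in the traceback: pass to_seg explicitly
# so get_danmakus() does not need to read view["dm_seg"]["total"] itself.
print(sync(v.get_danmakus(from_seg=0, to_seg=0)))  # first segment only
```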
| 1medium
|
Title: RuntimeError During Pytest Collection Because no App Context is Set Up Yet
Body: ## Current Behavior
My application uses the factory method for setting up the application, so I use a pattern similar to the following:
```python
# ./api/__init__.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
def create_app():
app = Flask(__name__)
db.init_app(app)
```
When collecting tests, pytest imports the test files as defined by in the pytest settings (in my case, the `pytest.ini` specifies that all `test_*` files in the directory `./tests` are collected). During the collection, the test files are imported before any of the fixtures are set up. I have a file which defines a model which is subclassed from `api.db.Model`. My test makes use of this model by using sqlalchemy to scan the database to evaluate the precondition. Something like this:
```python
from api.models import User
def it_creates_a_user(self, session):
# GIVEN there is no existing user
assert session.query(User).count() == 0
```
So when this file is imported, the `api.models.__init__.py` file is imported which, in turn, imports `api.models.user.User` which has a definition similar to the following:
```python
from api import db
class User(db.Model):
# columns
```
Again, when this import happens, pytest has not yet created the app fixture where I push the app_context, which means flask_sqlalchemy does not know which app the db is bound to and so it raises a `RuntimeError`:
```
RuntimeError: No application found. Either work inside a view function or push an application context. See http://flask-sqlalchemy.pocoo.org/contexts/.
```
This perplexed me greatly at first since I definitely am pushing an application context in my app fixture:
```python
@pytest.fixture(scope="session", autouse=True)
def app():
logger.info("Creating test application")
test_app = create_app()
with test_app.app_context():
yield test_app
```
It wasn't until I thought to run `pytest --continue-on-collection-errors` that I found that the tests all run and pass just fine after the RuntimeError is raised during the collection phase. It was then that it dawned on me what the cause of the issue was. I have worked around this issue by pushing the context in my `tests/__init__.py` file:
```python
# ./tests/__init__.py
from api import create_app
"""
This is a workaround for pytest collection of flask-sqlalchemy models.
When running tests, the first thing that pytest does is to collect the
tests by importing all the test files. If a test file imports a model,
as they will surely do, then the model tries to use the api.db before
the app has been created. Doing this makes flask-sqlalchemy raise a
RuntimeError saying that there is no application found. The following
code only exists to set the app context during test import to avoid
this RuntimeError. The tests will us the app fixture which sets up the
context and so this has no effect on the tests when they are run.
"""
app = create_app()
app.app_context().push()
```
This feels a little dirty and I'm hopeful that there is a way for this issue to be solved.
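(For what it's worth, the same workaround can presumably live at the top of `tests/conftest.py` instead of `tests/__init__.py`, since conftest files are imported before the test modules are collected; untested sketch below.)
```python
# tests/conftest.py -- untested variant of the same workaround.
# Pushed only so that importing test modules (and therefore api.models)
# during collection does not raise "No application found".
from api import create_app

_collection_app = create_app()
_collection_app.app_context().push()
```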
### Relevant Code
```toml
# pyproject.toml
# Unrelated items
[tool.pytest.ini_options]
minversion = "6.0"
log_auto_indent = true
log_cli = true
log_cli_format = "%(levelname)-5.5s [%(name)s] %(message)s"
testpaths = ["tests"]
python_functions = ["test_*", "it_*"]
```
```python
# api/__init__.py
import os
import shlex
import subprocess
from dotenv import load_dotenv
from flask import Flask
from flask_marshmallow import Marshmallow
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
ma = Marshmallow()
def create_app(test_config=None):
app = Flask(__name__, instance_relative_config=True)
if test_config is None:
app.config.from_object(os.getenv("APP_SETTINGS"))
else:
app.config.from_mapping(test_config)
db.init_app(app)
ma.init_app(app)
# a simple page that says hello
@app.route("/health-check")
def health_check(): # pragma: no cover
cmd = "git describe --always"
current_rev = subprocess.check_output(shlex.split(cmd)).strip()
return current_rev
return app
```
```python
# ./api/models/__init__.py
from .user import User
```
```python
# ./api/models/user.py
from uuid import uuid4
from sqlalchemy_utils import EmailType, UUIDType
from api import db
class User(db.Model):
id = db.Column(UUIDType, primary_key=True, default=uuid4)
email = db.Column(EmailType, nullable=False)
def __repr__(self):
return f"<User: {self.email}>"
```
```python
# ./tests/unit/models/test_user.py
import logging
import pytest
import sqlalchemy
from api.models import User # pytest imports this line before it sets up any fixtures
logger = logging.getLogger(__name__)
class TestUserModel:
class TestNormalCase:
def it_creates_a_user(self, session):
# GIVEN No user exists in our database
assert session.query(User).count() == 0
# WHEN we add a new user
test_user = User(email="test@testing.com")
session.add(test_user)
session.commit()
# THEN the user is persisted in the database
actual_user = session.query(User).get(test_user.id)
assert actual_user == test_user
assert repr(actual_user) == f"<User: {test_user.email}>"
class TestErrorCase:
def it_requires_a_user_email(self, session):
with pytest.raises(sqlalchemy.exc.IntegrityError):
test_user = User()
session.add(test_user)
session.commit()
```
```python
# ./tests/conftest.py
# other stuff here, just showing that I am using the context in my app fixture
@pytest.fixture(scope="session", autouse=True)
def app():
logger.info("Creating test application")
test_app = create_app()
with test_app.app_context():
yield test_app
```
Environment:
- Python version: `3.9.1`
- Flask-SQLAlchemy version: `2.4.4`
- SQLAlchemy version: `1.3.23`
| 1medium
|
Title: Error: PolicyBasedRL
Body: **Describe the issue**:
I tried running the following model space with PolicyBasedRL; I will also include the experiment configuration:
```python
# BASELINE NAS USING v2.7
from nni.retiarii.serializer import model_wrapper
import torch.nn.functional as F
import nni.retiarii.nn.pytorch as nn
class Block1(nn.Module):
def __init__(self, layer_size):
super().__init__()
self.conv1 = nn.Conv2d(3, layer_size*2, 3, stride=1,padding=1)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(layer_size*2, layer_size*8, 3, stride=1, padding=1)
def forward(self, x):
x = F.relu(self.conv1(x))
x = self.pool(x)
x = F.relu(self.conv2(x))
x = self.pool(x)
return x
class Block2(nn.Module):
def __init__(self, layer_size):
super().__init__()
self.conv1 = nn.Conv2d(3, layer_size, 3, stride=1,padding=1)
self.conv2 = nn.Conv2d(layer_size, layer_size*2, 3, stride=1,padding=1)
self.pool = nn.MaxPool2d(2, 2)
self.conv3 = nn.Conv2d(layer_size*2, layer_size*8, 3, stride=1,padding=1)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
x = self.pool(x)
x = F.relu(self.conv3(x))
x = self.pool(x)
return x
class Block3(nn.Module):
def __init__(self, layer_size):
super().__init__()
self.conv1 = nn.Conv2d(3, layer_size, 3, stride=1,padding=1)
self.conv2 = nn.Conv2d(layer_size, layer_size*2, 3, stride=1,padding=1)
self.pool = nn.MaxPool2d(2, 2)
self.conv3 = nn.Conv2d(layer_size*2, layer_size*4, 3, stride=1,padding=1)
self.conv4 = nn.Conv2d(layer_size*4, layer_size*8, 3, stride=1, padding=1)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
x = self.pool(x)
x = F.relu(self.conv3(x))
x = F.relu(self.conv4(x))
x = self.pool(x)
return x
@model_wrapper
class Net(nn.Module):
def __init__(self):
super().__init__()
rand_var = nn.ValueChoice([32,64])
self.conv1 = nn.LayerChoice([Block1(rand_var),Block2(rand_var),Block3(rand_var)])
self.conv2 = nn.Conv2d(rand_var*8,rand_var*16 , 3, stride=1, padding=1)
self.fc1 = nn.Linear(rand_var*16*8*8, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(self.conv2(x))
x = x.reshape(x.shape[0],-1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
model = Net()
from nni.retiarii.experiment.pytorch import RetiariiExeConfig, RetiariiExperiment
exp = RetiariiExperiment(model, trainer, [], RL_strategy)
exp_config = RetiariiExeConfig('local')
exp_config.experiment_name = '5%_RL_10_epochs_64_batch'
exp_config.trial_concurrency = 2
exp_config.max_trial_number = 100
#exp_config.trial_gpu_number = 2
exp_config.max_experiment_duration = '660m'
exp_config.execution_engine = 'base'
exp_config.training_service.use_active_gpu = False
```
--> This led to the following error:
```
[2022-04-24 23:49:22] ERROR (nni.runtime.msg_dispatcher_base/Thread-5) 3
Traceback (most recent call last):
File "/Users/sh/opt/anaconda3/lib/python3.7/site-packages/nni/runtime/msg_dispatcher_base.py", line 88, in command_queue_worker
self.process_command(command, data)
File "/Users/sh/opt/anaconda3/lib/python3.7/site-packages/nni/runtime/msg_dispatcher_base.py", line 147, in process_command
command_handlers[command](data)
File "/Users/sh/opt/anaconda3/lib/python3.7/site-packages/nni/retiarii/integration.py", line 170, in handle_report_metric_data
self._process_value(data['value']))
File "/Users/sh/opt/anaconda3/lib/python3.7/site-packages/nni/retiarii/execution/base.py", line 111, in _intermediate_metric_callback
model = self._running_models[trial_id]
KeyError: 3
```
What does this error mean, why does it occur, and how can I fix it?
Thanks for your help!
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
**How to reproduce it?**: | 1medium
|
Title: [BUG] Whoogle personal cloud docker instance suddenly showing arabic and right to left layout
Body: I set up an instance of Whoogle on an Ubuntu Oracle Cloud server last week. I used Docker to get the latest version of Whoogle and have been using it on my Fedora laptop and my Pixel 6 phone. It's been working fine and the results have always been in English, with the interface showing English too. Unfortunately, about 1 or 2 hours ago I noticed I was getting search results in English but with the display showing right to left and the interface language showing in Arabic. This happened on both my laptop and my phone. I actually thought someone had hacked my instance, so I deleted the Docker instance and tried again, but am still getting the issue.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'my whoogle instance url' - I don't want to share the url as I want to keep it personal
2. Click on 'search box and enter search term then return'
3. See error = search results shown in English with right to left formatting and the interface (like NEXT) showing in Arabic
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: [e.g. iOS] Fedora Silverblue on laptop
- Browser [e.g. chrome, safari] Firefox
- Version [e.g. 22] 105.1
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6] Pixel 6
- OS: [e.g. iOS8.1] Android 13
- Browser [e.g. stock browser, safari] Bromite
- Version [e.g. 22] 106
**Additional context**
I tried setting the additional env variables for language and I seem to have fixed the mobile version with these additional settings, but the desktop version is still showing the issue.
added env:
```
-e WHOOGLE_CONFIG_LANGUAGE=lang_en \
-e WHOOGLE_CONFIG_SEARCH_LANGUAGE=lang_en \
```
docker command:
```
docker run --restart=always --publish 5000:5000 --detach --name whoogle-search \
-e WHOOGLE_CONFIG_URL=https://xxx.xxx.xx\
-e WHOOGLE_CONFIG_THEME=system \
-e WHOOGLE_CONFIG_DISABLE=1\
-e WHOOGLE_CONFIG_ALTS=1 \
-e WHOOGLE_ALT_TW=xxx.xxx.xxx \
-e WHOOGLE_ALT_YT=xxx.xxx.xxx \
-e WHOOGLE_ALT_RD=xxx.xxx.xxx \
-e WHOOGLE_ALT_TL=xxx.xxx.xxx\
-e WHOOGLE_ALT_WIKI=xxx.xxx.xxx \
-e WHOOGLE_CONFIG_NEW_TAB=1 \
-e WHOOGLE_RESULTS_PER_PAGE=30 \
-e WHOOGLE_CONFIG_GET_ONLY=1 \
-e WHOOGLE_CONFIG_LANGUAGE=lang_en \
-e WHOOGLE_CONFIG_SEARCH_LANGUAGE=lang_en \
benbusby/whoogle-search:latest
```
Happy to share my personal URL with support for help with troubleshooting. I just don't want to post it publicly.
| 1medium
|
Title: I try use one backbone and neck to achieve a multitask model (include pose and seg)
Body: ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I have already reviewed the related topics in the issues and repositories.
**Such as:**
https://github.com/ultralytics/ultralytics/issues/6949
https://github.com/ultralytics/ultralytics/pull/5219
https://github.com/ultralytics/ultralytics/issues/5073
https://github.com/yermandy/ultralytics/tree/multi-task-model
https://github.com/stedavkle/ultralytics/tree/multitask
https://github.com/JiayuanWang-JW/YOLOv8-multi-task
My `PoSeg` repository is based on https://github.com/stedavkle/ultralytics/tree/multitask **(thanks, stedavkle)**
Now my model has a problem:
During the training process, the accuracy for both keypoints and segmentation masks is 0, as follows:
``` shell
Epoch GPU_mem box_loss pose_loss seg_loss kobj_loss cls_loss dfl_loss Instances Size
19/20 0G 3.686 9.066 5.869 4.282 1.412 0.7072 55 640: 100%|โโโโโโโโโโ| 2/2 [00:04<00:00, 2.05s/it]
Class Images Instances Box(P R mAP50 mAP50-95) Pose(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100%|โโโโโโโโโโ| 1/1 [00:00<00:00, 3.59it/s]
all 4 15 0 0 0 0 0 0 0 0 0 0 0 0
Epoch GPU_mem box_loss pose_loss seg_loss kobj_loss cls_loss dfl_loss Instances Size
20/20 0G 3.689 10.34 5.701 4.333 1.571 0.7128 83 640: 100%|โโโโโโโโโโ| 2/2 [00:04<00:00, 2.12s/it]
Class Images Instances Box(P R mAP50 mAP50-95) Pose(P R mAP50 mAP50-95) Mask(P R mAP50 mAP50-95): 100%|โโโโโโโโโโ| 1/1 [00:00<00:00, 3.54it/s]
all 4 15 0 0 0 0 0 0 0 0 0 0 0 0
```
I am not sure which part has gone wrong, leading to the inference accuracy being 0 for all parts.
My yolo-poSeg address : https://github.com/Mosazh/yolo-poSeg
### Additional
_No response_ | 2hard
|
Title: Bad dependencies in v1.3.0
Body: The packaged v1.3.0 has a dependency of `>=2, <3` for the Django version.
This should be relaxed to `>=2` in `setup.py` to match `requirements.txt`.
In addition, whilst it requires Dash < 1.11, it doesn't constrain dash-core-components (1.9.0) or dash-renderer (1.3.0), which also leads to errors on installation.
| 1medium
|
Title: building go-bot in russian
Body: Good day!
I want to build a go-bot using DeepPavlov in the Russian language (following the example of this [notebook](https://colab.research.google.com/github/deepmipt/DeepPavlov/blob/master/examples/gobot_extended_tutorial.ipynb)).
I created a dataset in the DSTC2 format. Now I want to add NER training to the go-bot config pipeline, because my dataset includes **_names_** and **_phones_**, and I **can't** include all possible variants in slot_vals.json.
Is it possible to implement this with DeepPavlov? | 1medium
|
Title: Training with various input sizes?
Body: I have various photographs of different sizes that I am trying to train on, and I keep getting errors similar to `RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 16 and 17 in dimension 3`.
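(For reference, one generic workaround I've seen suggested, not specific to this repo, is to pad every image so both sides are divisible by the generator's total downsampling factor before training. A rough sketch, assuming a factor of 256 as with the default UNet generator:)
```python
from PIL import Image

def pad_to_multiple(img: Image.Image, multiple: int = 256) -> Image.Image:
    """Pad an image on the right/bottom so both sides divide evenly by `multiple`."""
    w, h = img.size
    new_w = ((w + multiple - 1) // multiple) * multiple
    new_h = ((h + multiple - 1) // multiple) * multiple
    padded = Image.new(img.mode, (new_w, new_h))  # black padding
    padded.paste(img, (0, 0))
    return padded
```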
I've tried setting `--preprocess` to either `none` or `scale_width` and I have tried setting the `batch_size` to 1. Is it possible to input images of different rectangular sizes for training and testing? | 1medium
|
Title: Factories cannot randomly generate missing parameters for child factories if all params passed on higher level
Body: When at least one field isn't passed in the nested objects, all child objects are created the right way:
```python
from pydantic_factories import ModelFactory
from pydantic import BaseModel


class A(BaseModel):
    name: str
    age: int


class B(BaseModel):
    a: A
    name: str  # THIS LINE DIFFERENT TO NEXT EXAMPLE


class C(BaseModel):
    b: B
    name: str


class CFactory(ModelFactory):
    __model__ = C


CFactory.build(**{'b': {'a': {'name': 'test'}}})
# C(b=B(a=A(name='test', age=8345), name='dLiQxkFuLvlMINwbCkbp'), name='uWGxEDUWlAejTgMePGXZ')
```
However, if we pass all fields of the nested objects, then nested-object creation is ignored:
```python
from pydantic_factories import ModelFactory
from pydantic import BaseModel


class A(BaseModel):
    name: str
    age: int


class B(BaseModel):
    a: A
    # name: str  # THIS LINE DIFFERENT TO PREV EXAMPLE


class C(BaseModel):
    b: B
    name: str


class CFactory(ModelFactory):
    __model__ = C


CFactory.build(**{'b': {'a': {'name': 'test'}}})
```
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In [19], line 1
----> 1 CFactory.build(**{'b': {'a': {'name': 'test'}}})
File ./venv/lib/python3.10/site-packages/pydantic_factories/factory.py:724, in ModelFactory.build(cls, factory_use_construct, **kwargs)
721 return cast("T", cls.__model__.construct(**kwargs))
722 raise ConfigurationError("factory_use_construct requires a pydantic model as the factory's __model__")
--> 724 return cast("T", cls.__model__(**kwargs))
File ./venv/lib/python3.10/site-packages/pydantic/main.py:342, in BaseModel.__init__(__pydantic_self__, **data)
340 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
341 if validation_error:
--> 342 raise validation_error
343 try:
344 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for C
b -> a -> age
field required (type=value_error.missing)
```
That is explained by the logic at https://github.com/starlite-api/pydantic-factories/blob/main/pydantic_factories/factory.py#L200. For now I work around it with something like the sketch below, which builds the incomplete nested model through its own factory first.
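(`AFactory` below is just an illustrative helper, not part of the library; the idea is to let the child factory fill in the missing fields before the parent model is validated.)
```python
class AFactory(ModelFactory):
    __model__ = A

data = {'b': {'a': {'name': 'test'}}}
# Let the child factory randomize the missing fields (here: age) first,
# then hand the now-complete dict to the parent factory.
data['b']['a'] = AFactory.build(**data['b']['a']).dict()
CFactory.build(**data)
```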
I would still expect the child objects to be created automatically in both cases. | 1medium
|
Title: [Feature] Add Key-value attributes/properties
Body: **Is your feature request related to a problem? Please describe.**
For a dataset which is going to be used for instance segmentation, I want to add certain properties with non-discrete values to each annotation. For example, I have a dataset of objects and I want to add a mass attribute with a ground-truth mass (which is a floating-point number) to the annotated object. The current labelflags don't suffice in this case.
**Describe the solution you'd like**
Each time you have to choose a label for the annotation, you also have the option to select a certain attribute, and for a selected attribute you have to fill in a number, string or whatever.
| 1medium
|
Title: [BUG] Douyin - fetching the reply data for a specified video's comments returns 400
Body: Hi, after pulling the project I tested "Douyin - fetch the reply data for a specified video's comments", and it returns 400.
I also went through the documentation carefully beforehand, and testing against your online API returns 400 as well.
https://douyin.wtf/docs#/Douyin-Web-API/fetch_video_comments_reply_api_douyin_web_fetch_video_comment_replies_get

| 1medium
|
Title: pro - suggest adding page editing
Body: 


| 1medium
|
Title: LAST_INSERT_ID() returns 0 when sharing a cursor.execute(...) call with INSERT
Body: Windows 10 Home 1903 (18362.720)
Python 3.8.2 x64, aiomysql 0.0.20
MariaDB 10.4
Using `Pool` with `auto_commit=True`
Querying `SELECT LAST_INSERT_ID();` within the same `await cursor.execute(...)` call as the `INSERT...;` query the `LAST_INSERT_ID()` should be referencing causes the next `await cursor.fetchone()` call to yield an empty list (this is on a "clean" cursor).
A current workaround involves splitting the `SELECT LAST_INSERT_ID();` and `INSERT...;` queries into separate `await cursor.execute(...)` calls. This yields expected behaviour of `await cursor.fetchone()`.
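For reference, a minimal sketch of the working split (illustrative only: the table and column names are made up, and it assumes a pool created with `autocommit=True`):
```python
async def insert_and_get_id(pool):
    async with pool.acquire() as conn:
        async with conn.cursor() as cur:
            # Putting "INSERT ...; SELECT LAST_INSERT_ID();" in a single
            # execute() call makes the following fetchone() come back empty.
            # Splitting the statements behaves as expected:
            await cur.execute("INSERT INTO items (name) VALUES (%s)", ("foo",))
            await cur.execute("SELECT LAST_INSERT_ID()")
            row = await cur.fetchone()
            return row[0]
```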
I have not tested whether this behaviour is inherited from PyMySQL, rather than aiomysql.
Considering most documentation on SQL best practice indicates it is ideal to keep the `SELECT LAST_INSERT_ID();` close to whatever `AUTO_INCREMENT` it is referencing, requiring two `await cursor.execute(...)` calls may be considered a non-ideal case. Please do let me know if I have configured my environment in a fashion that causes this behaviour.
Many thanks
fwf | 1medium
|
Title: Missing photos counter wrong
Body: ## ๐ Bug Report
### What Operating system and version is LibrePhotos running on:
unknown
### What architecture is LibrePhotos running on:
x64
### How is LibrePhotos installed:
Docker
### Description of issue:
[u/nagarrido_96](https://old.reddit.com/user/nagarrido_96) on [reddit reports](https://old.reddit.com/r/librephotos/comments/w240mi/missing_photos_counter_wrong/): I have a test instance for librephotos, and recently I deleted all photos but one (directly from the source folder). When I click "remove missing photos" the missing photos get deleted, but the counter for total photos and missing photos does not reset. Has this happened to anyone?
### How can we reproduce it:
- Delete images from folder
- Run "Delete Missing" job
- Counter does not change
| 1medium
|
Title: [Feature request] Provide a means to convert to numpy array without byteswapping
Body: ### System information
ONNX 1.15
### What is the problem that this feature solves?
Issue onnx/tensorflow-onnx#1902 in tf2onnx occurs on big endian systems, and it is my observation that attributes which end up converting to integers are incorrectly byteswapped because the original data resided within a tensor. If `numpy_helper.to_array()` could be updated to optionally not perform byteswapping, then that could help solve this issue.
### Alternatives considered
As an alternative, additional logic could be added in tf2onnx to perform byteswapping on the data again, but this seems excessive.
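For illustration, that converter-side workaround would look roughly like the sketch below (an assumption-laden illustration only; it presumes the unwanted extra swap can be undone with a second `byteswap()` on big-endian hosts):
```python
import sys
from onnx import numpy_helper

def tensor_to_array_workaround(tensor):
    arr = numpy_helper.to_array(tensor)
    if sys.byteorder == "big":
        # Swap the bytes back so downstream integer conversions see the
        # intended values; unnecessary if to_array() could skip the swap.
        arr = arr.byteswap()
    return arr
```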
### Describe the feature
I believe this feature is necessary to improve support for big endian systems.
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
converters
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_ | 1medium
|
Title: Export torch script with dynamic batch, nms and FP16
Body: ### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
I want to get this model for Triton Server with pytorch backend
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | 2hard
|
Title: Extra ) in apps.py?
Body: ### Issue Summary
Is there an extra parenthesis [here](https://github.com/django-oscar/django-oscar/blob/master/src/oscar/apps/catalogue/reviews/apps.py#L26)?
Should
```
path('<int:pk>)/vote/', login_required(self.vote_view.as_view()), name='reviews-vote'),
```
be
```
path('<int:pk>/vote/', login_required(self.vote_view.as_view()), name='reviews-vote'),
```
? | 0easy
|
Title: Suggest customizing the model directory
Body: Suggest allowing the model directory to be customized.
By default the models are downloaded to the C drive,
which takes up space on the C drive.
--------------------------------------------
It would also be best to display a list of the models and their download URLs,
making it convenient for users to download them in bulk.
```python
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /labelmeai/efficient-sam/releases/download/onnx-models-20231225/efficient_sam_vits_decoder.onnx (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001C46F7A9490>, 'Connection to github.com timed out. (connect timeout=None)'))
```
Users often cannot download the models automatically.
----------------------------------------------
This way, users can download all the models themselves
and put them in the custom directory. | 1medium
|
Title: No registered handler for event
Body: ### The problem
I have a Reolink E1 Zoom camera and I can see the following in the HA log:
2025-03-12 10:54:27.698 WARNING (MainThread) [homeassistant.components.onvif] Cam1: No registered handler for event from c4:3c:b0:07:40:80: {
'SubscriptionReference': None,
'Topic': {
'_value_1': 'tns1:RuleEngine/MyRuleDetector/Package',
'Dialect': 'http://www.onvif.org/ver10/tev/topicExpression/ConcreteSet',
'_attr_1': {
}
},
'ProducerReference': None,
'Message': {
'_value_1': {
'Source': {
'SimpleItem': [
{
'Name': 'Source',
'Value': '000'
}
],
'ElementItem': [],
'Extension': None,
'_attr_1': None
},
'Key': None,
'Data': {
'SimpleItem': [
{
'Name': 'State',
'Value': 'false'
}
],
'ElementItem': [],
'Extension': None,
'_attr_1': None
},
'Extension': None,
'UtcTime': datetime.datetime(2025, 3, 12, 9, 54, 27, tzinfo=datetime.timezone.utc),
'PropertyOperation': 'Initialized',
'_attr_1': {
}
}
}
}
### What version of Home Assistant Core has the issue?
core-2025.3.2
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
Onvif
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
2025-03-12 10:54:27.698 WARNING (MainThread) [homeassistant.components.onvif] Cam1: No registered handler for event from c4:3c:b0:07:40:80: {
'SubscriptionReference': None,
'Topic': {
'_value_1': 'tns1:RuleEngine/MyRuleDetector/Package',
'Dialect': 'http://www.onvif.org/ver10/tev/topicExpression/ConcreteSet',
'_attr_1': {
}
},
'ProducerReference': None,
'Message': {
'_value_1': {
'Source': {
'SimpleItem': [
{
'Name': 'Source',
'Value': '000'
}
],
'ElementItem': [],
'Extension': None,
'_attr_1': None
},
'Key': None,
'Data': {
'SimpleItem': [
{
'Name': 'State',
'Value': 'false'
}
],
'ElementItem': [],
'Extension': None,
'_attr_1': None
},
'Extension': None,
'UtcTime': datetime.datetime(2025, 3, 12, 9, 54, 27, tzinfo=datetime.timezone.utc),
'PropertyOperation': 'Initialized',
'_attr_1': {
}
}
}
}
```
### Additional information
_No response_ | 1medium
|
Title: [Feature request] Speed up voice cloning from the same speaker
Body: <!-- Welcome to the 🐸TTS project!
We are excited to see your interest, and appreciate your support! --->
**🚀 Feature Description**
Thanks to the project.
We had a try and the result is pretty good.
However, there is one major issue: the voice cloning speed is slow, especially when running inference on CPU.
We might need to generate the voice several times for the same speaker, so could we speed up the process?
<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Solution**
Here is how we use the code:
```
# Init TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
# Text to speech to a file
tts.tts_to_file(text=words, language="zh-cn", file_path=out, speaker_wav="data/audio/sun1.wav")
```
Since we might need to clone the same speaker and generate the voice several times, is it possible to speed up the process?
Could we export some intermediate results or a fine-tuned model, and reuse or reload them next time?
We would expect the voice generation speed to then be as fast as using a single model.
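A rough sketch of what we are hoping for, continuing from the snippet above and assuming the underlying XTTS model exposes `get_conditioning_latents` as in recent Coqui releases (names and call signatures may differ):
```python
# Compute the speaker conditioning once ...
xtts_model = tts.synthesizer.tts_model
gpt_cond_latent, speaker_embedding = xtts_model.get_conditioning_latents(
    audio_path=["data/audio/sun1.wav"]
)

# ... and reuse it for every generation from the same speaker.
for words in texts_to_generate:  # hypothetical list of input texts
    result = xtts_model.inference(
        text=words,
        language="zh-cn",
        gpt_cond_latent=gpt_cond_latent,
        speaker_embedding=speaker_embedding,
    )
```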
<!-- A clear and concise description of what you want to happen. -->
**Alternative Solutions**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| 1medium
|
Title: [BUG] @model_validator(mode="before") issue
Body: **Describe the bug**
When using something like this, `values` is a `DjangoGetter` instance which is somewhat unexpected to me. Prior to 1.0 and the pydantic update you would get a plain dictionary.
```python
class SomeSchema(Schema):
    somevar: int

    @model_validator(mode="before")
    @classmethod
    def foo(cls, values):
        values.get("something")
```
**Versions (please complete the following information):**
- Python version: 3.10
- Django version: 4.1
- Django-Ninja version: 1.0.1
- Pydantic version: 2.5.1
| 1medium
|
Title: Create a 'hidden-digest-auth'
Body: Add a new service "hidden-digest-auth", similar to "hidden-basic-auth", returning 404 instead of 401. This is especially necessary because digest authentication responding with 401 in AJAX applications will cause the browser to always prompt the user for credentials on the first attempt.
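A minimal sketch of what such an endpoint could look like (illustrative Flask code only, not httpbin's actual implementation; the verification helper is a placeholder):
```python
from flask import Flask, request

app = Flask(__name__)

def digest_credentials_ok(auth_header, user, passwd):
    # Placeholder for real digest (RFC 2617) verification.
    return False

@app.route("/hidden-digest-auth/<user>/<passwd>")
def hidden_digest_auth(user, passwd):
    auth = request.headers.get("Authorization", "")
    if not digest_credentials_ok(auth, user, passwd):
        # 404 instead of 401: no WWW-Authenticate challenge is sent,
        # so browsers never pop up a credentials prompt for AJAX calls.
        return "", 404
    return {"authenticated": True, "user": user}
```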
| 1medium
|
Title: I want to train a fast-rcnn model but I got a bug: TypeError: __init__() got an unexpected keyword argument 'proposal_file'
Body: after_run:
(BELOW_NORMAL) LoggerHook
--------------------
Traceback (most recent call last):
File "tools/train.py", line 121, in <module>
main()
File "tools/train.py", line 117, in main
runner.train()
File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1728, in train
self._train_loop = self.build_train_loop(
File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1520, in build_train_loop
loop = LOOPS.build(
File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/registry/registry.py", line 570, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
obj = obj_cls(**args) # type: ignore
File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/runner/loops.py", line 44, in __init__
super().__init__(runner, dataloader)
File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/runner/base_loop.py", line 26, in __init__
self.dataloader = runner.build_dataloader(
File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1370, in build_dataloader
dataset = DATASETS.build(dataset_cfg)
File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/registry/registry.py", line 570, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
obj = obj_cls(**args) # type: ignore
TypeError: __init__() got an unexpected keyword argument 'proposal_file' | 1medium
|
Title: Enable plt.close() to clear memory
Body: I've been using the Visualizer to extract the variables elbow_value_ and elbow_score_ for multiple batches of training data. While looping through each batch, the corresponding figure is automatically rendered and plotted, which consumes a lot of memory. I suggest enabling the option to call plt.close(), or even to skip plt.show(), to improve performance. | 1medium
|
Title: Split out individual events in the event base.py to their own files
Body: Things are getting real crowded in there. | 1medium
|
Title: Reusing 'parametrize' values?
Body: Hi, is there any way to reuse parametrize values?
For example, I have the following test configuration:
```yaml
---
test_name: Successfully returns the auto registration history
includes:
  - !include stage.async-result.yaml
marks:
  - parametrize:
      key: vin
      vals:
        - > A lot of values here <
stages:
  - name: Trigger the task
    request:
      url: "http://localhost:8000/auto/history/{vin}"
      method: GET
    response:
      status_code: 202
      save:
        $ext:
          function: utils:url_from_refresh_header
  - type: ref
    id: async_result
---
test_name: Successfully returns auto accidents
includes:
  - !include stage.async-result.yaml
marks:
  - parametrize:
      key: vin
      vals:
        - > A lot of values here <
stages:
  - name: Trigger the task
    request:
      url: "http://localhost:8000/auto/accidents/{vin}"
      method: GET
    response:
      status_code: 202
      save:
        $ext:
          function: utils:url_from_refresh_header
  - type: ref
    id: async_result
```
How can I define the parametrize values in one place and reuse them? | 1medium
|
Title: Uwsgi using the wrong python version
Body: Hello,
I'm trying to update my app to Python 3.7 with new `tiangolo/uwsgi-nginx-flask:python3.7-alpine3.8` image. When launching the image with Docker compose, it seems like uwsgi is using Python 3.6.6 instead.
```
Attaching to fakeimg
fakeimg | Checking for script in /app/prestart.sh
fakeimg | There is no script /app/prestart.sh
fakeimg | /usr/lib/python2.7/site-packages/supervisor/options.py:461: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
fakeimg | 'Supervisord is running as root and it is searching '
fakeimg | 2019-02-19 12:48:32,061 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
fakeimg | 2019-02-19 12:48:32,061 INFO Included extra file "/etc/supervisor.d/supervisord.ini" during parsing
fakeimg | 2019-02-19 12:48:32,076 INFO RPC interface 'supervisor' initialized
fakeimg | 2019-02-19 12:48:32,076 CRIT Server 'unix_http_server' running without any HTTP authentication checking
fakeimg | 2019-02-19 12:48:32,081 INFO supervisord started with pid 1
fakeimg | 2019-02-19 12:48:33,083 INFO spawned: 'nginx' with pid 10
fakeimg | 2019-02-19 12:48:33,086 INFO spawned: 'uwsgi' with pid 11
fakeimg | [uWSGI] getting INI configuration from /app/uwsgi.ini
fakeimg | [uWSGI] getting INI configuration from /etc/uwsgi/uwsgi.ini
fakeimg |
fakeimg | ;uWSGI instance configuration
fakeimg | [uwsgi]
fakeimg | cheaper = 2
fakeimg | processes = 16
fakeimg | ini = /app/uwsgi.ini
fakeimg | module = main
fakeimg | callable = app
fakeimg | wsgi-disable-file-wrapper = true
fakeimg | ini = /etc/uwsgi/uwsgi.ini
fakeimg | socket = /tmp/uwsgi.sock
fakeimg | chown-socket = nginx:nginx
fakeimg | chmod-socket = 664
fakeimg | hook-master-start = unix_signal:15 gracefully_kill_them_all
fakeimg | need-app = true
fakeimg | die-on-term = true
fakeimg | plugin = python3
fakeimg | show-config = true
fakeimg | ;end of configuration
fakeimg |
fakeimg | *** Starting uWSGI 2.0.17 (64bit) on [Tue Feb 19 12:48:33 2019] ***
fakeimg | compiled with version: 6.4.0 on 01 May 2018 17:28:25
fakeimg | os: Linux-4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018
fakeimg | nodename: 786a77edca64
fakeimg | machine: x86_64
fakeimg | clock source: unix
fakeimg | pcre jit disabled
fakeimg | detected number of CPU cores: 4
fakeimg | current working directory: /app
fakeimg | detected binary path: /usr/sbin/uwsgi
fakeimg | your memory page size is 4096 bytes
fakeimg | detected max file descriptor number: 1048576
fakeimg | lock engine: pthread robust mutexes
fakeimg | thunder lock: disabled (you can enable it with --thunder-lock)
fakeimg | uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
fakeimg | uWSGI running as root, you can use --uid/--gid/--chroot options
fakeimg | *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
fakeimg | Python version: 3.6.6 (default, Aug 24 2018, 05:04:18) [GCC 6.4.0]
fakeimg | *** Python threads support is disabled. You can enable it with --enable-threads ***
fakeimg | Python main interpreter initialized at 0x56004504cfa0
```
This is making my application crash, because every package has been installed for Python 3.7.2.
The project is [here](https://github.com/Rydgel/Fake-images-please) if you need to view the Dockerfile. | 1medium
|
Title: Surprising FIFO behaviour of lifecycle hooks
Body: ## Describe the (maybe) Bug
I'm surprised that various lifecycle hooks (`on_operation`, `on_parse`, `on_validate`, `on_execute`) are completed in a FIFO fashion, rather than LIFO.
I would expect that if we're wrapping an operation with `on_operation` with 2 extension the following will happen (LIFO):
* First extension starts (before `yield` part)
* Second extension starts (before `yield` part)
* Second extension completes (after `yield` part)
* First extension completes (after `yield` part)
However, the order I'm _actually_ seeing is the following (FIFO):
* First extension starts (before `yield` part)
* Second extension starts (before `yield` part)
* First extension completes (after `yield` part)
* Second extension completes (after `yield` part)
I'm concerned about it because extensions can mutate state, so it would be good for them to behave like a context manager. [Example of state mutation.](https://strawberry.rocks/docs/guides/custom-extensions#execution-context)
In fact, I do find it surprising that this is how things work out. Notably, overriding `resolve` doesn't have the same problem, but it also happens in a slightly different way.
## Repro details
Here're some toy extensions I built to investigate things:
```python
class MyCustomExtension(SchemaExtension):
    id = "?"

    @override
    async def on_validate(self) -> AsyncGenerator[None, None]:
        print(f"GraphQL validation start ({self.id})")
        yield
        print(f"GraphQL validation end ({self.id})")

    @override
    def on_parse(self) -> Generator[None, None, None]:
        print(f"GraphQL parsing start ({self.id})")
        yield
        print(f"GraphQL parsing end ({self.id})")

    @override
    def on_execute(self) -> Generator[None, None, None]:
        print(f"GraphQL execution start ({self.id})")
        yield
        print(f"GraphQL execution end ({self.id})")

    @override
    def on_operation(self) -> Generator[None, None, None]:
        print(f"GraphQL operation start ({self.id})")
        yield
        print(f"GraphQL operation end ({self.id})")

    @override
    async def resolve(
        self,
        _next: Callable[..., object],
        root: object,
        info: GraphQLResolveInfo,
        *args,
        **kwargs,
    ) -> AwaitableOrValue[object]:
        random_id = randint(0, 1000)
        print(f"GraphQL resolver {random_id} start ({self.id})")
        result = await await_maybe(_next(root, info, *args, **kwargs))
        print(f"GraphQL resolver {random_id} end ({self.id})")
        return result


class MyCustomExtensionA(MyCustomExtension):
    id = "A"


class MyCustomExtensionB(MyCustomExtension):
    id = "B"
```
I'm testing it by running a simple query against a GraphQL Schema:
```python
@strawberry.type
class Me:
    id: str


@strawberry.type
class Query:
    @strawberry.field
    @staticmethod
    async def me() -> Me:
        return Me(id="foo")


schema = MySchema(
    query=Query,
    extensions=[MyCustomExtensionA, MyCustomExtensionB],
)
```
When running a simple GraphQL query against this schema:
```graphql
query {
me { id }
}
```
I see the following lines being printed:
```
GraphQL operation start (A)
GraphQL operation start (B)
GraphQL parsing start (A)
GraphQL parsing start (B)
GraphQL parsing end (A)
GraphQL parsing end (B)
GraphQL validation start (A)
GraphQL validation start (B)
GraphQL validation end (A)
GraphQL validation end (B)
GraphQL execution start (A)
GraphQL execution start (B)
GraphQL resolver 598 start (B)
GraphQL resolver 975 start (A)
GraphQL resolver 975 end (A)
GraphQL resolver 598 end (B)
GraphQL resolver 196 start (B)
GraphQL resolver 638 start (A)
GraphQL resolver 638 end (A)
GraphQL resolver 196 end (B)
GraphQL execution end (A)
GraphQL execution end (B)
GraphQL operation end (A)
GraphQL operation end (B)
```
## System Information
- Strawberry version (if applicable): `0.220.0` | 1medium
|
Title: Conda install -c esri arcgis not working !
Body: **Describe the bug**
I can't install the ArcGIS API with the Anaconda distribution. I entered the following at the Anaconda Prompt in a new environment:
conda install -c esri arcgis
My virtual env is called ArcGIS_API and I have not installed any other packages.
error:
```
UnsatisfiableError:
Note that strict channel priority may have removed packages required for satisfiability.
```
**Screenshots**

**Expected behavior**
I expected the ArcGIS API to install.
**Platform (please complete the following information):**
- OS: Windows 11
| 1medium
|
Title: Legend position gets overwritten / can not be set
Body: When plotting with the new `seaborn.objects` Plot method including a legend (like `.add(Line(), legend=True)`), the position of the legend gets hardcoded to specific positions (https://github.com/mwaskom/seaborn/blob/master/seaborn/_core/plot.py#L1612-L1624), which differ depending on the usage of pyplot (legend gets pulled on top of the right side over the graph's content) or not (e.g. in a Jupyter notebook, or on `.save()` ,the legend gets place to the right side outside of the plot graph area).
If the users want a different position of the legend (possible especially in the case of pyplot usage, where the legend may easily be generated over important data points), they have no possibility to move it to another place. The old (=not `seaborn.objects`) method [`move_legend()`](https://seaborn.pydata.org/generated/seaborn.move_legend.html) does not work for `Plot` objects, neither can the seaborn objects `.theme()` approach be used, with setting matplotlib rc params (which would be [`legend.loc`](https://matplotlib.org/stable/tutorials/introductory/customizing.html#:~:text=legend.loc) in conjunction with `bbox_to_anchor` of the [matplotlib legend](https://matplotlib.org/stable/api/legend_api.html#matplotlib.legend.Legend)), because the `loc` is hardcoded, and will such be overwritten, additionally there is no possibility at all for the users to set `bbox_to_anchor` for the legend.
The PR https://github.com/mwaskom/seaborn/pull/3128 solves this issue, through staying at the current default behavior, if no additional parameters are given, but respecting the matplotlib rc params entry for `legend.loc` if it is given like `.theme({"legend.loc": "upper left"})`.
Additionally it gives the new possibility to set the `bbox_to_anchor` of the legend, like `.theme({"legend_loc__bbox_to_anchor": (0.1, 0.95)})` (where the kind this is implemented might not be ideal).
This gives the users full control over the legend position.
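For reference, the usage proposed in that PR would look roughly like this (a sketch with made-up data and column names; `legend_loc__bbox_to_anchor` is the PR's proposed key, not an existing seaborn option):
```python
import seaborn.objects as so

(
    so.Plot(df, x="time", y="value", color="group")  # df and columns are placeholders
    .add(so.Line(), legend=True)
    .theme({
        "legend.loc": "upper left",                   # would now be respected
        "legend_loc__bbox_to_anchor": (0.1, 0.95),    # new option from the PR
    })
)
```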
| 1medium
|
Title: XGBOD model file is too large
Body: Hello!
I spent an hour training an XGBOD model on my data set, and it worked well on the test set, but after saving the model file with pickle, I found that the file size was 1.2G!
Is there a way to reduce the size of the model file?
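One thing that usually helps (a generic sketch, not specific to PyOD) is saving the fitted model with compression via joblib instead of a plain pickle:
```python
from joblib import dump, load

# `clf` is the fitted XGBOD model; compress=3 trades a bit of save/load
# time for a much smaller file on disk.
dump(clf, "xgbod_model.joblib", compress=3)
clf = load("xgbod_model.joblib")
```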
(the size of my data set is (30412, 86)) | 1medium
|
Title: Multi-gpu training
Body: Hi, I am stuck on how multi-GPU training would work for loss functions with more than one negative, particularly [NTXentLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#ntxentloss).
In [SimCLR](https://arxiv.org/abs/2002.05709), the number of _negatives_ per some _positive_ pair is taken to be `2 * (N - 1)` (all examples in a minibatch of size `N` that don't belong to that positive pair), and they find ([as other works before them](https://github.com/KevinMusgrave/pytorch-metric-learning)) that the bigger the batch size, the larger the number of negatives, and the better the learned representations.
`DataParallel` and `DistributedDataParallel` divide up a mini-batch, send each partition of examples to a GPU, compute gradients, and then average these gradients before backpropagating. But this means that each GPU is computing a loss with `N/n_gpus` examples and, therefore, `2 * (N/n_gpus - 1)` negatives per positive pair.
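(For context, the kind of workaround I am imagining is to all-gather embeddings from every process before computing the loss, so each GPU sees the full set of negatives. A rough sketch, assuming `torch.distributed` is already initialized:)
```python
import torch
import torch.distributed as dist
from pytorch_metric_learning.losses import NTXentLoss

loss_fn = NTXentLoss(temperature=0.07)

def gathered_loss(local_embeddings, local_labels):
    world_size = dist.get_world_size()
    emb_list = [torch.zeros_like(local_embeddings) for _ in range(world_size)]
    lab_list = [torch.zeros_like(local_labels) for _ in range(world_size)]
    dist.all_gather(emb_list, local_embeddings.contiguous())
    dist.all_gather(lab_list, local_labels.contiguous())
    # all_gather does not propagate gradients from other ranks, so keep this
    # rank's own (grad-tracking) tensor in place of the gathered copy.
    emb_list[dist.get_rank()] = local_embeddings
    return loss_fn(torch.cat(emb_list), torch.cat(lab_list))
```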
My question is: how might I use a loss function from this library, such as [NTXentLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#ntxentloss), with `DataParallel` or `DistributedDataParallel` that avoids this "issue"? I.e. that allows me to use multiple GPUs while maintaining `2 * (N - 1)` negatives per positive pair. | 2hard
|
Title: [BUG] Trailing newlines when using progress bar in notebook
Body: - [X] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [X] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
First of all, thanks a lot for this library!
Displaying a progress bar in a jupyter notebook leads to several trailing newlines being added, seemingly when the progress bar completes. The issue doesn't occur when running the same code in a terminal. Might be related to #2711.

<details>
<summary> Click to see source code </summary>
```python
from time import sleep
from rich.progress import Progress
with Progress() as progress:
    task = progress.add_task("Working", total=10, metrics="")
    for batch in range(10):
        sleep(0.1)
        progress.update(task, advance=1, refresh=True)

print("There's a lot of space above me")
```
</details>
**Platform**
<details>
<summary>Click to expand</summary>
```python
โญโโโโโโโโโโโโโโโโโโโโโโ <class 'rich.console.Console'> โโโโโโโโโโโโโโโโโโโโโโโฎ
โ A high level console interface. โ
โ โ
โ โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ โ
โ โ <console width=115 ColorSystem.TRUECOLOR> โ โ
โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ โ
โ โ
โ color_system = 'truecolor' โ
โ encoding = 'utf-8' โ
โ file = <ipykernel.iostream.OutStream object at 0x7f799c8dc9d0> โ
โ height = 100 โ
โ is_alt_screen = False โ
โ is_dumb_terminal = False โ
โ is_interactive = False โ
โ is_jupyter = True โ
โ is_terminal = False โ
โ legacy_windows = False โ
โ no_color = False โ
โ options = ConsoleOptions( โ
โ size=ConsoleDimensions(width=115, height=100), โ
โ legacy_windows=False, โ
โ min_width=1, โ
โ max_width=115, โ
โ is_terminal=False, โ
โ encoding='utf-8', โ
โ max_height=100, โ
โ justify=None, โ
โ overflow=None, โ
โ no_wrap=False, โ
โ highlight=None, โ
โ markup=None, โ
โ height=None โ
โ ) โ
โ quiet = False โ
โ record = False โ
โ safe_box = True โ
โ size = ConsoleDimensions(width=115, height=100) โ
โ soft_wrap = False โ
โ stderr = False โ
โ style = None โ
โ tab_size = 8 โ
โ width = 115 โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
โญโโโ <class 'rich._windows.WindowsConsoleFeatures'> โโโโโฎ
โ Windows features available. โ
โ โ
โ โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ โ
โ โ WindowsConsoleFeatures(vt=False, truecolor=False) โ โ
โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ โ
โ โ
โ truecolor = False โ
โ vt = False โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
โญโโโโโโ Environment Variables โโโโโโโโฎ
โ { โ
โ 'TERM': 'xterm-color', โ
โ 'COLORTERM': 'truecolor', โ
โ 'CLICOLOR': '1', โ
โ 'NO_COLOR': None, โ
โ 'TERM_PROGRAM': 'vscode', โ
โ 'COLUMNS': None, โ
โ 'LINES': None, โ
โ 'JUPYTER_COLUMNS': None, โ
โ 'JUPYTER_LINES': None, โ
โ 'JPY_PARENT_PID': '18491', โ
โ 'VSCODE_VERBOSE_LOGGING': None โ
โ } โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
platform="Linux"
```
</details>
| 1medium
|
Title: KeyError when indexing into Series after calling `to_series` on Scalar
Body: <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
**Minimal Complete Verifiable Example**:
```python
In [1]: import dask.dataframe as dd
In [2]: import pandas as pd
In [3]: data = {"a": [1, 3, 2]}
In [4]: df = dd.from_pandas(pd.DataFrame(data), npartitions=2)
In [5]: df['a'].sum().to_series().fillna(0)[0].compute()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/pandas/core/indexes/base.py:3805, in Index.get_loc(self, key)
3804 try:
-> 3805 return self._engine.get_loc(casted_key)
3806 except KeyError as err:
File index.pyx:167, in pandas._libs.index.IndexEngine.get_loc()
File index.pyx:196, in pandas._libs.index.IndexEngine.get_loc()
File pandas/_libs/hashtable_class_helper.pxi:2606, in pandas._libs.hashtable.Int64HashTable.get_item()
File pandas/_libs/hashtable_class_helper.pxi:2630, in pandas._libs.hashtable.Int64HashTable.get_item()
KeyError: 0
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[5], line 1
----> 1 df['a'].sum().to_series().fillna(0)[0].compute()
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask/dataframe/dask_expr/_collection.py:489, in FrameBase.compute(self, fuse, concatenate, **kwargs)
487 out = out.repartition(npartitions=1)
488 out = out.optimize(fuse=fuse)
--> 489 return DaskMethodsMixin.compute(out, **kwargs)
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask/base.py:374, in DaskMethodsMixin.compute(self, **kwargs)
350 def compute(self, **kwargs):
351 """Compute this dask collection
352
353 This turns a lazy Dask collection into its in-memory equivalent.
(...)
372 dask.compute
373 """
--> 374 (result,) = compute(self, traverse=False, **kwargs)
375 return result
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask/base.py:662, in compute(traverse, optimize_graph, scheduler, get, *args, **kwargs)
659 postcomputes.append(x.__dask_postcompute__())
661 with shorten_traceback():
--> 662 results = schedule(dsk, keys, **kwargs)
664 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/pandas/core/indexes/base.py:3812, in Index.get_loc(self, key)
3807 if isinstance(casted_key, slice) or (
3808 isinstance(casted_key, abc.Iterable)
3809 and any(isinstance(x, slice) for x in casted_key)
3810 ):
3811 raise InvalidIndexError(key)
-> 3812 raise KeyError(key) from err
3813 except TypeError:
3814 # If we have a listlike key, _check_indexing_error will raise
3815 # InvalidIndexError. Otherwise we fall through and re-raise
3816 # the TypeError.
3817 self._check_indexing_error(key)
KeyError: 0
```
If I `compute` before the `[0]`, then I get:
```
In [6]: df['a'].sum().to_series().fillna(0).compute()
Out[6]:
0 6
dtype: int64
```
so I'd have expected the `[0]` to work
**Anything else we need to know?**:
spotted in Narwhals
**Environment**:
- Dask version: 2025.1.0
- Python version: 3.12
- Operating System: linux
- Install method (conda, pip, source): pip
| 1medium
|
Title: quality is insanely horrid
Body: even when using large recordings, uncompressed .wav, and short audio "text", it's just hissing and weird glitchiness. | 2hard
|
Title: Can't build the thing, requires outdated visual c++
Body: C:\MyProjects\___MECHANIKOS\GPUCloudDeepLearningResearch>pip install aiohttp aiortc opencv-python
Requirement already satisfied: aiohttp in c:\python36-32\lib\site-packages (3.5.4)
Collecting aiortc
Using cached https://files.pythonhosted.org/packages/ba/ae/35360b00e9f03103bebb553c37220f721f766f90490b4203912cfadf0be2/aiortc-0.9.18.tar.gz
Collecting opencv-python
Using cached https://files.pythonhosted.org/packages/49/4b/ad55a2e2c309fb698e1283e687129e0892c7864de9a4424c4ff01ba0a3bb/opencv_python-4.0.0.21-cp36-cp36m-win32.whl
Requirement already satisfied: chardet<4.0,>=2.0 in c:\python36-32\lib\site-packages (from aiohttp) (3.0.4)
Requirement already satisfied: typing-extensions>=3.6.5; python_version < "3.7" in c:\python36-32\lib\site-packages (from aiohttp) (3.7.2)
Requirement already satisfied: idna-ssl>=1.0; python_version < "3.7" in c:\python36-32\lib\site-packages (from aiohttp) (1.1.0)
Requirement already satisfied: yarl<2.0,>=1.0 in c:\python36-32\lib\site-packages (from aiohttp) (1.3.0)
Requirement already satisfied: async-timeout<4.0,>=3.0 in c:\python36-32\lib\site-packages (from aiohttp) (3.0.1)
Requirement already satisfied: multidict<5.0,>=4.0 in c:\python36-32\lib\site-packages (from aiohttp) (4.5.2)
Requirement already satisfied: attrs>=17.3.0 in c:\python36-32\lib\site-packages (from aiohttp) (18.2.0)
Requirement already satisfied: aioice<0.7.0,>=0.6.12 in c:\python36-32\lib\site-packages (from aiortc) (0.6.12)
Collecting av<7.0.0,>=6.1.0 (from aiortc)
Using cached https://files.pythonhosted.org/packages/15/80/edc9e110b2896ebe16863051e68bd4786efeda71ce94b81a048d146062cc/av-6.1.0.tar.gz
Requirement already satisfied: cffi>=1.0.0 in c:\users\fruitfulapproach\appdata\roaming\python\python36\site-packages (from aiortc) (1.11.5)
Collecting crc32c (from aiortc)
Collecting cryptography>=2.2 (from aiortc)
Using cached https://files.pythonhosted.org/packages/af/d7/9e6442de1aa61d3268e5abd7fb73b130cfc2e42439a7db42248653844593/cryptography-2.4.2-cp36-cp36m-win32.whl
Collecting pyee (from aiortc)
Using cached https://files.pythonhosted.org/packages/8e/06/10c18578e2d8b9cf9902f424f86d433c647ca55e82293100f53e6c0afab4/pyee-5.0.0-py2.py3-none-any.whl
Collecting pylibsrtp>=0.5.6 (from aiortc)
Using cached https://files.pythonhosted.org/packages/77/21/0a65a6ea02879fd4af7f6e137cb4fb14a64f72f8557112408462fc43124f/pylibsrtp-0.6.0-cp36-cp36m-win32.whl
Collecting pyopenssl (from aiortc)
Using cached https://files.pythonhosted.org/packages/96/af/9d29e6bd40823061aea2e0574ccb2fcf72bfd6130ce53d32773ec375458c/pyOpenSSL-18.0.0-py2.py3-none-any.whl
Requirement already satisfied: numpy>=1.11.3 in c:\python36-32\lib\site-packages (from opencv-python) (1.14.5)
Requirement already satisfied: idna>=2.0 in c:\python36-32\lib\site-packages (from idna-ssl>=1.0; python_version < "3.7"->aiohttp) (2.6)
Requirement already satisfied: netifaces in c:\python36-32\lib\site-packages (from aioice<0.7.0,>=0.6.12->aiortc) (0.10.9)
Requirement already satisfied: pycparser in c:\users\fruitfulapproach\appdata\roaming\python\python36\site-packages (from cffi>=1.0.0->aiortc) (2.19)
Collecting asn1crypto>=0.21.0 (from cryptography>=2.2->aiortc)
Using cached https://files.pythonhosted.org/packages/ea/cd/35485615f45f30a510576f1a56d1e0a7ad7bd8ab5ed7cdc600ef7cd06222/asn1crypto-0.24.0-py2.py3-none-any.whl
Requirement already satisfied: six>=1.4.1 in c:\python36-32\lib\site-packages (from cryptography>=2.2->aiortc) (1.11.0)
Building wheels for collected packages: aiortc, av
Running setup.py bdist_wheel for aiortc ... error
Complete output from command c:\python36-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\FRUITF~1\\AppData\\Local\\Temp\\pip-install-5l34ichq\\aiortc\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\FRUITF~1\AppData\Local\Temp\pip-wheel-k1arcvoo --python-tag cp36:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win32-3.6
creating build\lib.win32-3.6\aiortc
copying aiortc\clock.py -> build\lib.win32-3.6\aiortc
copying aiortc\events.py -> build\lib.win32-3.6\aiortc
copying aiortc\exceptions.py -> build\lib.win32-3.6\aiortc
copying aiortc\jitterbuffer.py -> build\lib.win32-3.6\aiortc
copying aiortc\mediastreams.py -> build\lib.win32-3.6\aiortc
copying aiortc\rate.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcconfiguration.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcdatachannel.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcdtlstransport.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcicetransport.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcpeerconnection.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcrtpparameters.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcrtpreceiver.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcrtpsender.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcrtptransceiver.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcsctptransport.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcsessiondescription.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtp.py -> build\lib.win32-3.6\aiortc
copying aiortc\sdp.py -> build\lib.win32-3.6\aiortc
copying aiortc\stats.py -> build\lib.win32-3.6\aiortc
copying aiortc\utils.py -> build\lib.win32-3.6\aiortc
copying aiortc\__init__.py -> build\lib.win32-3.6\aiortc
creating build\lib.win32-3.6\aiortc\codecs
copying aiortc\codecs\g711.py -> build\lib.win32-3.6\aiortc\codecs
copying aiortc\codecs\h264.py -> build\lib.win32-3.6\aiortc\codecs
copying aiortc\codecs\opus.py -> build\lib.win32-3.6\aiortc\codecs
copying aiortc\codecs\vpx.py -> build\lib.win32-3.6\aiortc\codecs
copying aiortc\codecs\__init__.py -> build\lib.win32-3.6\aiortc\codecs
creating build\lib.win32-3.6\aiortc\contrib
copying aiortc\contrib\media.py -> build\lib.win32-3.6\aiortc\contrib
copying aiortc\contrib\signaling.py -> build\lib.win32-3.6\aiortc\contrib
copying aiortc\contrib\__init__.py -> build\lib.win32-3.6\aiortc\contrib
running build_ext
generating cffi module 'build\\temp.win32-3.6\\Release\\aiortc.codecs._vpx.c'
creating build\temp.win32-3.6
creating build\temp.win32-3.6\Release
generating cffi module 'build\\temp.win32-3.6\\Release\\aiortc.codecs._opus.c'
building 'aiortc.codecs._opus' extension
creating build\temp.win32-3.6\Release\build
creating build\temp.win32-3.6\Release\build\temp.win32-3.6
creating build\temp.win32-3.6\Release\build\temp.win32-3.6\Release
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\python36-32\include -Ic:\python36-32\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /Tcbuild\temp.win32-3.6\Release\aiortc.codecs._opus.c /Fobuild\temp.win32-3.6\Release\build\temp.win32-3.6\Release\aiortc.codecs._opus.obj
aiortc.codecs._opus.c
build\temp.win32-3.6\Release\aiortc.codecs._opus.c(493): fatal error C1083: Cannot open include file: 'opus/opus.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x86\\cl.exe' failed with exit status 2
----------------------------------------
Failed building wheel for aiortc
Running setup.py clean for aiortc
Running setup.py bdist_wheel for av ... error
Complete output from command c:\python36-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\FRUITF~1\\AppData\\Local\\Temp\\pip-install-5l34ichq\\av\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\FRUITF~1\AppData\Local\Temp\pip-wheel-yn9ebcj9 --python-tag cp36:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win32-3.6
creating build\lib.win32-3.6\av
copying av\datasets.py -> build\lib.win32-3.6\av
copying av\deprecation.py -> build\lib.win32-3.6\av
copying av\__init__.py -> build\lib.win32-3.6\av
copying av\__main__.py -> build\lib.win32-3.6\av
creating build\lib.win32-3.6\scratchpad
copying scratchpad\audio.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\audio_player.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\average.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\cctx_decode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\cctx_encode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\decode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\decode_threads.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\dump_format.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\encode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\encode_frames.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\experimental.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\filmstrip.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\frame_seek_example.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\glproxy.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\graph.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\merge-filmstrip.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\player.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\qtproxy.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\remux.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\resource_use.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\save_subtitles.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\second_seek_example.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\seekmany.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\show_frames_opencv.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\__init__.py -> build\lib.win32-3.6\scratchpad
creating build\lib.win32-3.6\av\audio
copying av\audio\__init__.py -> build\lib.win32-3.6\av\audio
creating build\lib.win32-3.6\av\codec
copying av\codec\__init__.py -> build\lib.win32-3.6\av\codec
creating build\lib.win32-3.6\av\container
copying av\container\__init__.py -> build\lib.win32-3.6\av\container
creating build\lib.win32-3.6\av\data
copying av\data\__init__.py -> build\lib.win32-3.6\av\data
creating build\lib.win32-3.6\av\filter
copying av\filter\__init__.py -> build\lib.win32-3.6\av\filter
creating build\lib.win32-3.6\av\subtitles
copying av\subtitles\__init__.py -> build\lib.win32-3.6\av\subtitles
creating build\lib.win32-3.6\av\video
copying av\video\__init__.py -> build\lib.win32-3.6\av\video
running build_ext
running config
writing build\temp.win32-3.6\Release\include\pyav\config.h
running cythonize
building 'av.buffer' extension
creating build\temp.win32-3.6\Release\src
creating build\temp.win32-3.6\Release\src\av
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Iinclude -Ic:\python36-32\include -Ibuild\temp.win32-3.6\Release\include -Ic:\python36-32\include -Ic:\python36-32\include -Ibuild\temp.win32-3.6\Release\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /Tcsrc\av\buffer.c /Fobuild\temp.win32-3.6\Release\src\av\buffer.obj
buffer.c
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\python36-32\libs /LIBPATH:c:\python36-32\PCbuild\win32 /LIBPATH:c:\python36-32\libs /LIBPATH:c:\python36-32\PCbuild\win32 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\lib\x86" "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\lib\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17763.0\ucrt\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17763.0\um\x86" avformat.lib avcodec.lib swresample.lib swscale.lib avdevice.lib avfilter.lib avutil.lib /EXPORT:PyInit_buffer build\temp.win32-3.6\Release\src\av\buffer.obj /OUT:build\lib.win32-3.6\av\buffer.cp36-win32.pyd /IMPLIB:build\temp.win32-3.6\Release\src\av\buffer.cp36-win32.lib /OPT:NOREF
LINK : fatal error LNK1181: cannot open input file 'avformat.lib'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x86\\link.exe' failed with exit status 1181
----------------------------------------
Failed building wheel for av
Running setup.py clean for av
Failed to build aiortc av
Installing collected packages: av, crc32c, asn1crypto, cryptography, pyee, pylibsrtp, pyopenssl, aiortc, opencv-python
Running setup.py install for av ... error
Complete output from command c:\python36-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\FRUITF~1\\AppData\\Local\\Temp\\pip-install-5l34ichq\\av\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\FRUITF~1\AppData\Local\Temp\pip-record-foc3l784\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win32-3.6
creating build\lib.win32-3.6\av
copying av\datasets.py -> build\lib.win32-3.6\av
copying av\deprecation.py -> build\lib.win32-3.6\av
copying av\__init__.py -> build\lib.win32-3.6\av
copying av\__main__.py -> build\lib.win32-3.6\av
creating build\lib.win32-3.6\scratchpad
copying scratchpad\audio.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\audio_player.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\average.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\cctx_decode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\cctx_encode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\decode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\decode_threads.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\dump_format.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\encode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\encode_frames.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\experimental.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\filmstrip.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\frame_seek_example.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\glproxy.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\graph.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\merge-filmstrip.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\player.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\qtproxy.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\remux.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\resource_use.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\save_subtitles.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\second_seek_example.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\seekmany.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\show_frames_opencv.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\__init__.py -> build\lib.win32-3.6\scratchpad
creating build\lib.win32-3.6\av\audio
copying av\audio\__init__.py -> build\lib.win32-3.6\av\audio
creating build\lib.win32-3.6\av\codec
copying av\codec\__init__.py -> build\lib.win32-3.6\av\codec
creating build\lib.win32-3.6\av\container
copying av\container\__init__.py -> build\lib.win32-3.6\av\container
creating build\lib.win32-3.6\av\data
copying av\data\__init__.py -> build\lib.win32-3.6\av\data
creating build\lib.win32-3.6\av\filter
copying av\filter\__init__.py -> build\lib.win32-3.6\av\filter
creating build\lib.win32-3.6\av\subtitles
copying av\subtitles\__init__.py -> build\lib.win32-3.6\av\subtitles
creating build\lib.win32-3.6\av\video
copying av\video\__init__.py -> build\lib.win32-3.6\av\video
running build_ext
running config
writing build\temp.win32-3.6\Release\include\pyav\config.h
running cythonize
building 'av.buffer' extension
creating build\temp.win32-3.6\Release\src
creating build\temp.win32-3.6\Release\src\av
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Ic:\python36-32\include -Ibuild\temp.win32-3.6\Release\include -Iinclude -Ic:\python36-32\include -Ic:\python36-32\include -Ibuild\temp.win32-3.6\Release\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /Tcsrc\av\buffer.c /Fobuild\temp.win32-3.6\Release\src\av\buffer.obj
buffer.c
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\python36-32\PCbuild\win32 /LIBPATH:c:\python36-32\libs /LIBPATH:c:\python36-32\libs /LIBPATH:c:\python36-32\PCbuild\win32 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\lib\x86" "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\lib\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17763.0\ucrt\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17763.0\um\x86" avformat.lib avfilter.lib avutil.lib swresample.lib avcodec.lib swscale.lib avdevice.lib /EXPORT:PyInit_buffer build\temp.win32-3.6\Release\src\av\buffer.obj /OUT:build\lib.win32-3.6\av\buffer.cp36-win32.pyd /IMPLIB:build\temp.win32-3.6\Release\src\av\buffer.cp36-win32.lib /OPT:NOREF
LINK : fatal error LNK1181: cannot open input file 'avformat.lib'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x86\\link.exe' failed with exit status 1181
----------------------------------------
Command "c:\python36-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\FRUITF~1\\AppData\\Local\\Temp\\pip-install-5l34ichq\\av\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\FRUITF~1\AppData\Local\Temp\pip-record-foc3l784\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\FRUITF~1\AppData\Local\Temp\pip-install-5l34ichq\av\
| 2hard
|
Title: Query is sending an INSERT request with back_populates
Body: I am not sure whether this is a bug or not, but it seems strange that running a query would send an INSERT request.
Here is a repository to replicate the bug. https://github.com/Noezor/example_flask_sqlalchemy_bug/
### Expected Behavior
For the models :
```python
from config import db


class Parent(db.Model):
    __tablename__ = "parent"
    id = db.Column(db.Integer(), primary_key=True)
    name = db.Column(db.String, unique=True)
    children = db.relationship("Child", back_populates="parent")


class Child(db.Model):
    __tablename__ = "child"
    id = db.Column(db.Integer(), primary_key=True)
    name = db.Column(db.String(32), unique=True)
    parent_id = db.Column(db.Integer, db.ForeignKey("parent.id"))
    parent = db.relationship("Parent", back_populates="children")
```
And now the testscript.
```python
from config import db
from model import Child, Parent
parent = Parent(name='John')
if not Parent.query.filter(Parent.name == parent.name).one_or_none():
db.session.add(parent)
db.session.commit()
else :
parent = Parent.query.filter(Parent.name == parent.name).one_or_none()
child1 = Child(name="Toto",parent = parent)
if not Child.query.filter(Child.name == "Toto").one_or_none() :
db.session.add(child1)
db.session.commit()
else :
child1 = Child.query.filter(Child.name == "Toto").one_or_none()
print("success")
```
At first launch, the program should work fine. At second launch, once the database is populated, there should not be a problem either, as the query will detect that the database already contains the added elements.
### Actual Behavior
At first launch, everything works fine. On the other hand, at second launch, it seems that the line `if not Child.query.filter(Child.name == "Toto").one_or_none():` is sending an INSERT request.
```pytb
2020-02-01 15:23:28,552 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1
2020-02-01 15:23:28,552 INFO sqlalchemy.engine.base.Engine ()
2020-02-01 15:23:28,553 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1
2020-02-01 15:23:28,554 INFO sqlalchemy.engine.base.Engine ()
2020-02-01 15:23:28,555 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2020-02-01 15:23:28,556 INFO sqlalchemy.engine.base.Engine SELECT parent.id AS parent_id, parent.name AS parent_name
FROM parent
WHERE parent.name = ?
2020-02-01 15:23:28,557 INFO sqlalchemy.engine.base.Engine ('John',)
2020-02-01 15:23:28,560 INFO sqlalchemy.engine.base.Engine SELECT parent.id AS parent_id, parent.name AS parent_name
FROM parent
WHERE parent.name = ?
2020-02-01 15:23:28,560 INFO sqlalchemy.engine.base.Engine ('John',)
**2020-02-01 15:23:28,567 INFO sqlalchemy.engine.base.Engine INSERT INTO child (name, parent_id) VALUES (?, ?)**
2020-02-01 15:23:28,568 INFO sqlalchemy.engine.base.Engine ('Toto', 1)
2020-02-01 15:23:28,569 INFO sqlalchemy.engine.base.Engine ROLLBACK
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
sqlite3.IntegrityError: UNIQUE constraint failed: child.name
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/pionn/.vscode-insiders/extensions/ms-python.python-2020.1.58038/pythonFiles/ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "/home/pionn/.vscode-insiders/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/old_ptvsd/ptvsd/__main__.py", line 432, in main
run()
File "/home/pionn/.vscode-insiders/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/old_ptvsd/ptvsd/__main__.py", line 316, in run_file
runpy.run_path(target, run_name='__main__')
File "/usr/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
**File "/home/pionn/minimum_bug_sqlalchemy/test.py", line 13, in <module>
if not Child.query.filter(Child.name == "Toto").one_or_none() :**
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2784, in one_or_none
ret = list(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2854, in __iter__
self.session._autoflush()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 1407, in _autoflush
util.raise_from_cause(e)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
raise value
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 1397, in _autoflush
self.flush()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2171, in flush
self._flush(objects)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2291, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
raise value
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2255, in _flush
flush_context.execute()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/unitofwork.py", line 389, in execute
rec.execute(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/unitofwork.py", line 548, in execute
uow
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/persistence.py", line 181, in save_obj
mapper, table, insert)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/persistence.py", line 835, in _emit_insert_statements
execute(statement, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 945, in execute
return meth(self, multiparams, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1402, in _handle_dbapi_exception
exc_info
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely) (sqlite3.IntegrityError) UNIQUE constraint failed: child.name [SQL: 'INSERT INTO child (name, parent_id) VALUES (?, ?)'] [parameters: ('Toto', 1)]
```
I believe it happens through back_populates: if I remove it, the "bug" disappears. The same happens if I don't specify a parent for the child.
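For what it's worth, a possible workaround sketch (my own addition, not part of the original report) is to look up the existing Child before constructing a new one, or to wrap the existence check in `session.no_autoflush` so the pending object is not flushed by the query:
```python
from config import db
from model import Child, Parent

parent = Parent.query.filter(Parent.name == "John").one_or_none() or Parent(name="John")

# Option 1: query first, and only construct (and add) the Child if it is
# missing, so there is no pending object for autoflush to insert.
child1 = Child.query.filter(Child.name == "Toto").one_or_none()
if child1 is None:
    child1 = Child(name="Toto", parent=parent)
    db.session.add(child1)
    db.session.commit()

# Option 2: keep the original order, but suppress the query-invoked
# autoflush while checking for an existing row.
with db.session.no_autoflush:
    existing = Child.query.filter(Child.name == "Toto").one_or_none()
```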
### Environment
* Operating system: Ubuntu 18.14
* Python version: 3.6.3
* Flask-SQLAlchemy version: 2.4.1
* SQLAlchemy version: 1.3.12
| 1medium
|
Title: Leaks memory when input is not a numpy array
Body: If you run the following program you see that `nansum` leaks all the memory it is given when passed a Pandas object. If it is passed the ndarray underlying the Pandas object instead, then there is no leak:
```python
import gc
import os

import bottleneck
import numpy as np
import pandas as pd
import psutil

def f():
    x = np.zeros(10 * 1024 * 1024, dtype='f4')
    # Leaks 40MB/iteration
    bottleneck.nansum(pd.Series(x))
    # No leak:
    # bottleneck.nansum(x)

process = psutil.Process(os.getpid())

def _get_usage():
    gc.collect()
    # .private is the Windows-specific field of psutil's memory_info()
    return process.memory_info().private / (1024 * 1024)

last_usage = _get_usage()
print(last_usage)
for _ in range(10):
    f()
    usage = _get_usage()
    print(usage - last_usage)
    last_usage = usage
```
This affects not just `nansum`, but apparently all the reduction functions (with or without `axis` specified), and at least some other functions like `move_max`.
I'm not completely sure why this happens, but maybe it's because `PyArray_FROM_O` is allocating a new array in this case, and the ref count of that is not being decremented by anyone? https://github.com/kwgoodman/bottleneck/blob/master/bottleneck/src/reduce_template.c#L1237
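Until that is confirmed, a workaround consistent with the observation above (my own suggestion, not a verified fix) is to hand bottleneck the Series' underlying ndarray, so no temporary array needs to be created:
```python
import bottleneck
import numpy as np
import pandas as pd

s = pd.Series(np.zeros(10 * 1024 * 1024, dtype='f4'))

# Same result as bottleneck.nansum(s), but skips the internal conversion
# that appears to leak the temporary array.
total = bottleneck.nansum(s.values)
```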
I'm using Bottleneck 1.2.1 with Pandas 0.23.1. `sys.version` is `3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 18:41:36) [MSC v.1900 64 bit (AMD64)]`. | 2hard
|
Title: error: (-215:Assertion failed) !buf.empty() in function 'imdecode_'
Body: First I met the following error:
(screenshot of the first error)
Then I added `int()` around "quality" as suggested in https://github.com/TencentARC/GFPGAN/issues/93
But I got another error:
(screenshot of the second error: `error: (-215:Assertion failed) !buf.empty() in function 'imdecode_'`)
| 1medium
|
Title: Disable `DiscreteNorm` by default for `imshow` plots?
Body: <!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). -->
### Description
In all the examples available for this package and in my tests, the colorbar's maximum number of bins is fixed at 10. There doesn't seem to be any way of changing this.
### Steps to reproduce
A "[Minimal, Complete and Verifiable Example](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports)" will make it much easier for maintainers to help you.
```python
import proplot
import matplotlib.pyplot as plt

# p, file and my_cmap come from my own data and colormap setup.
fig, ax = proplot.subplots()
pc = plt.imshow(p.image, extent=[file.bounds[0], file.bounds[1], file.bounds[2], file.bounds[3]], cmap=my_cmap, vmin=0.5, vmax=1.5)
plt.colorbar(pc, ax=ax)
ax.set_xlim([0, 1])
ax.set_ylim([0, 1])
ax.set_xlabel(r"$x/R$")
ax.set_ylabel(r"$y/R$")
plt.show()
```
**Actual behavior**: [What actually happened]

### Equivalent steps in matplotlib
Please make sure this bug is related to a specific proplot feature. If you're not sure, try to replicate it with the [native matplotlib API](https://matplotlib.org/3.1.1/api/index.html). Matplotlib bugs belong on the [matplotlib github page](https://github.com/matplotlib/matplotlib).
```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
pc = plt.imshow(p.image, extent=[file.bounds[0], file.bounds[1], file.bounds[2], file.bounds[3]], cmap=my_cmap, vmin=0.5, vmax=1.5)
plt.colorbar(pc, ax=ax)
ax.set_xlim([0, 1])
ax.set_ylim([0, 1])
ax.set_xlabel(r"$x/R$")
ax.set_ylabel(r"$y/R$")
plt.show()
```

| 1medium
|