text (string, lengths 20 to 57.3k) | labels (class label, 4 classes)
---|---|
Title: Provide custom browser profile location for cookie extractor ? (WSL)
Body: ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
Hello everybody,
With the recent fuck-up around YouTube and the need to use browser cookies, I'm facing a problem on my Windows setup: I'm using yt-dlp from WSL2, but my browser profile is on Windows. The process to get cookies manually is just utter crap, and I wanted to know if there is a way to provide the profile location to the --cookies-from-browser extractor so it can deal with it by itself, like it does on my Fedora-powered laptop (and on any standard, non-split OS/browser installation).
So far I've found nothing about it in the various pages/documentation (or maybe I've become really bad at finding information). Is there such an option, or could it be implemented? (I confirm this is not a feature request at this stage)
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
``` | 1medium
|
Title: [ENH] Improve `_check_params` method in `_k_means.py` and `_k_medoids.py`
Body: ### Describe the feature or idea you want to propose
Right now there is no `else` branch in the `_check_params` method when `self.init` is a string.
```python
def _check_params(self, X: np.ndarray) -> None:
self._random_state = check_random_state(self.random_state)
if isinstance(self.init, str):
if self.init == "random":
self._init = self._random_center_initializer
elif self.init == "kmedoids++":
self._init = self._kmedoids_plus_plus_center_initializer
elif self.init == "first":
self._init = self._first_center_initializer
elif self.init == "build":
self._init = self._pam_build_center_initializer
else:
# other code
```
So when we pass a string other than the expected ones, it leads to an error which is not very user-friendly.
It would be nice to let the user know what the cause of the error is.
I also noticed that the **build** initialization doesn't have any documentation. It would be good to add that as well.
### Describe your proposed solution
add an else statement
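A minimal sketch of that `else` branch (the standalone shape and the error wording are our assumptions; the option strings and initializer names mirror the snippet above):

```python
# Standalone sketch of the string dispatch with an informative else branch.
_VALID_INIT = ("random", "kmedoids++", "first", "build")

def resolve_init(init: str) -> str:
    if init == "random":
        return "_random_center_initializer"
    elif init == "kmedoids++":
        return "_kmedoids_plus_plus_center_initializer"
    elif init == "first":
        return "_first_center_initializer"
    elif init == "build":
        return "_pam_build_center_initializer"
    else:
        # Proposed: tell the user what went wrong and what is accepted.
        raise ValueError(
            f"Unknown init string {init!r}; valid options are {_VALID_INIT}"
        )

print(resolve_init("build"))  # _pam_build_center_initializer
```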
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_ | 0easy
|
Title: [AWS] Fail to detect cuda using H100 and default image
Body: A user reported that CUDA fails to be detected by torch when using `p5.48xlarge` and our default AMI, but it works with the official Ubuntu PyTorch deep learning AMI. | 1medium
|
Title: [FEATURE] FAN support
Body: **Is your feature request related to a problem? Please describe.**
Models that are invariant under non-differentiable impairments like JPEG compression, video compression (H265, VP9, etc), etc.
**Describe the solution you'd like**
[This paper](https://arxiv.org/pdf/2204.12451.pdf) suggests they've solved this with ViT-like models and enhanced attention, called [FAN](https://github.com/nvlabs/fan)
**Describe alternatives you've considered**
Training with differentiable approximations of the non-differentiable impairments. This works a bit, but not great.
| 2hard
|
Title: NLLB License
Body: ## ❓ Questions and Help
Here is the NLLB model's license, https://github.com/facebookresearch/fairseq/blob/nllb/LICENSE.model.md
Can we use NLLB model output (translation from language X to language Y) to train a model and release that model under a Commercially permitted license (i.e., Apache 2.0)? I understand the model license is for `Attribution-NonCommercial 4.0 International` but to generate the model output, we are actually paying for the compute hours. I understand that we cannot use model weights for commercial purposes. But what about the output generated by the model?
| 3misc
|
Title: Docs: climpy -> climopy in author bios
Body: I was browsing and noticed that climpy name was changed (https://github.com/lukelbd/climopy/commit/f4e502dbda176e31f149018593a734c21ed5319a) but not in its mention [here](https://github.com/lukelbd/proplot/blob/master/docs/authors.rst). The link does still work. | 0easy
|
Title: [Feature] txtai as a proxy like nboost
Body: Hi,
Hope you are all well !
I was wondering if we can use **txtai** like [nboost](https://github.com/koursaros-ai/nboost) as a proxy for elasticsearch or [manticoresearch](https://github.com/manticoresoftware/manticoresearch) ?
I am really interested in an integration with manticoresearch, as I wrote https://paper2code.com around this full-text search engine.
Thanks for your insights and inputs about this question.
Cheers,
X | 1medium
|
Title: Wrong heatmap for Swin-Transformer
Body: My code:
```python
if __name__ == '__main__':
    """ python swinT_example.py -image-path <path_to_image>
    Example usage of using cam-methods on a SwinTransformers network.
    """
    label_map_path = '../../imagenet1000_clsidx_to_labels.txt'
    with open(label_map_path, 'r') as file:
        label = eval(file.read())
    args = get_args()
    methods = {"gradcam": GradCAM,
               "scorecam": ScoreCAM,
               "gradcam++": GradCAMPlusPlus,
               "ablationcam": AblationCAM,
               "xgradcam": XGradCAM,
               "eigencam": EigenCAM,
               "eigengradcam": EigenGradCAM}
    if args.method not in list(methods.keys()):
        raise Exception(f"method should be one of {list(methods.keys())}")
    model = timm.create_model('swin_tiny_patch4_window7_224', pretrained=True)
    model.eval()
    if args.use_cuda:
        model = model.cuda()
    target_layer = model.layers[-1].blocks[-2].norm1
    if args.method not in methods:
        raise Exception(f"Method {args.method} not implemented")
    cam = methods[args.method](model=model,
                               target_layer=target_layer,
                               use_cuda=args.use_cuda,
                               reshape_transform=reshape_transform)
    rgb_img = cv2.imread(args.image_path, 1)[:, :, ::-1]
    rgb_img = cv2.resize(rgb_img, (224, 224))
    rgb_img = np.float32(rgb_img) / 255
    input_tensor = preprocess_image(rgb_img, mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])
    # If None, returns the map for the highest scoring category.
    # Otherwise, targets the requested category.
    target_category = None
    # AblationCAM and ScoreCAM have batched implementations.
    # You can override the internal batch size for faster computation.
    cam.batch_size = 32
    print(input_tensor.shape)
    grayscale_cam = cam(input_tensor=input_tensor,
                        target_category=target_category,
                        eigen_smooth=args.eigen_smooth,
                        aug_smooth=args.aug_smooth)
    input_tensor = input_tensor.cuda()
    output = model(input_tensor)
    output = nn.Softmax(dim=1)(output)
    sorted, ind = torch.sort(output)
    res_ind = ind[0][-1].item()
    # Here grayscale_cam has only one image in the batch
    grayscale_cam = grayscale_cam[0, :]
    cam_image = show_cam_on_image(rgb_img, grayscale_cam)
    print(label[res_ind])
    cv2.imwrite(f'{args.method}_cam_swin.jpg', cam_image)
```
My heatmap:


Is there something wrong with my code?
| 1medium
|
Title: [FEATURE] ImageNet1k weights for ViT Huge?
Body: Looking through `timm/models/vision_transformer.py`, the only Huge variants fine-tuned on ImageNet-1k are the ones pretrained with the CLIP loss. Are the normal variants (`augreg2` etc.) not available for the bigger sizes such as Huge and Giant? | 1medium
|
Title: Ability to use custom StrawberryDjangoField class for relay connections and nodes
Body: We can pass a custom StrawberryDjangoField via the `field_cls` parameter to types, but not to relay connections and nodes.
Is it possible to add this? | 1medium
|
Title: In August 2025 or later - drop support for k8s 1.30
Body: This is an issue to help us remember the policy (see #2591, #2979, #3040, #3042, #3309, #3320, #3321, #3509, #3510, #3511) to allow ourselves to drop support for k8s versions as long as major cloud providers has stopped supporting them.
k8s | [GKE EOL](https://endoflife.date/google-kubernetes-engine) | [EKS EOL](https://endoflife.date/amazon-eks) | [AKS EOL](https://docs.microsoft.com/en-gb/azure/aks/supported-kubernetes-versions?tabs=azure-cli#aks-kubernetes-release-calendar) | Comment
-|-|-|-|-
1.30 | 31 Jul 2025 | 23 Jul 2025 | Jul 2025 | We can drop support ~August 2025 | 1medium
|
Title: Update `BaseGroupApprovalTaskStateEmailNotifier` to check for `AbstractGroupApprovalTask` instances
Body: #11653 added `AbstractGroupApprovalTask` and `GroupApprovalTask` inherits from it.
However, the `BaseGroupApprovalTaskStateEmailNotifier` still checks for `GroupApprovalTask` - https://github.com/wagtail/wagtail/blob/6affa04d320f50b6a3babd8afc1a590e02e84a5d/wagtail/admin/mail.py#L368-L373
This means any custom group task derived from `AbstractGroupApprovalTask` requires adding custom notifiers and wiring in a signal handler
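A tiny sketch of the suggested change (class bodies are stand-ins, not Wagtail's actual code): checking against the abstract base makes the notifier pick up any derived task automatically.

```python
# Stand-in classes mirroring the hierarchy described above.
class AbstractGroupApprovalTask: ...
class GroupApprovalTask(AbstractGroupApprovalTask): ...
class CustomGroupTask(AbstractGroupApprovalTask): ...  # hypothetical user task

def can_handle_current(task):
    # Current behaviour: only the concrete class matches.
    return isinstance(task, GroupApprovalTask)

def can_handle_proposed(task):
    # Proposed behaviour: any AbstractGroupApprovalTask subclass matches.
    return isinstance(task, AbstractGroupApprovalTask)

print(can_handle_current(CustomGroupTask()))   # False -> needs custom notifier today
print(can_handle_proposed(CustomGroupTask()))  # True
```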
| 1medium
|
Title: [cluster/gce] Missing --project while adding kubeconfig metadata
Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately via https://kubernetes.io/security/
-->
#### What happened:
kube-up tries to add metadata to the wrong project.
https://testgrid.k8s.io/conformance-all#Conformance%20-%20GCE%20-%20master%20-%20kubetest2
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-gce-conformance-latest-kubetest2/1366974682632294400
```
ERROR: (gcloud.compute.instances.add-metadata) Could not fetch resource:
- Required 'compute.instances.get' permission for 'projects/k8s-infra-prow-build/zones/us-central1-b/instances/kt2-3a058694-7bdc-master'
```
It is trying to add metadata to the prow project running the tests instead of the target project.
This is because
https://github.com/kubernetes/kubernetes/blob/716a0547206a71ecc6e53f8b970728bc85063a60/cluster/gce/util.sh#L2885
is missing a `--project=${PROJECT}`
#### What you expected to happen:
kube-up uses the target project to add metadata
#### How to reproduce it (as minimally and precisely as possible):
Don't set/set a different default project and run cluster/kube-up.
#### Anything else we need to know?:
originally added in https://github.com/kubernetes/kubernetes/pull/77930
kubetest hides the project acquired from boskos under the covers by setting it as the default project
https://github.com/kubernetes/test-infra/blob/81be880b02cc25b7bf77f2ecb9607d0b625460e3/kubetest/main.go#L770
uncovered as part of migration to kubetest2
xref: https://github.com/kubernetes/enhancements/issues/2464
/cc @BenTheElder @spiffxp @mm4tt | 1medium
|
Title: Pull GeoPoint wrapper ValuesSources up to AggregationBuilder layer
Body: Currently, we have two ValuesSources, `CellIdSource` and `DistanceSource`, which we use to wrap a GeoPoint values source and treat it as a numeric values source. These get created very late in the process however, specifically in the aggregation factory or supplier. This creates a problem where we have a ValuesSourceConfig which no longer describes the values source we actually have. I'm trying to move us more towards using ValuesSourceConfig as the source of truth for all things Values Source, and this conflicts with that plan.
One solution we kicked around would be to create custom resolvers for these aggregations (i.e. overriding `ValuesSourceAggregationBuilder#resolveConfig`) to construct a values source config that reflects the actual values source. Relevant information, such as the resolution for Cell Id Source, would be stored on the ValuesSource directly. This would probably require a new values source type as well. | 1medium
|
Title: bar df data type issues when converting
Body: In Entity.py,
You are converting the timestamp by multiplying by 1e9, which changes the dtype to Float64; that is not accepted by the pandas library and raises a TypeError.
Can we do something like this?
```
if not df.empty:
    df.index = pd.to_datetime(
        (df.index * 1e9).astype('int64'), utc=True,
    ).tz_convert(NY)
```
File "/Users/jiawensun/Alpaca/samplealgo/algo.py", line 59, in _get_prices
return barset.df
File "/usr/local/lib/python3.7/site-packages/alpaca_trade_api/entity.py", line 107, in df
df = bars.df.copy()
File "/usr/local/lib/python3.7/site-packages/alpaca_trade_api/entity.py", line 85, in df
df.index * 1e9, utc=True,
File "/usr/local/lib/python3.7/site-packages/pandas/core/tools/datetimes.py", line 603, in to_datetime
result = convert_listlike(arg, box, format)
File "/usr/local/lib/python3.7/site-packages/pandas/core/tools/datetimes.py", line 223, in _convert_listlike_datetimes
arg, _ = maybe_convert_dtype(arg, copy=False)
File "/usr/local/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 1914, in maybe_convert_dtype
data = data.astype(_NS_DTYPE)
File "/usr/local/lib/python3.7/site-packages/pandas/core/indexes/numeric.py", line 330, in astype
raise TypeError(msg)
TypeError: Cannot convert Float64Index to dtype datetime64[ns]; integer values are required for conversion | 1medium
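For what it's worth, a quick standalone check of the proposed cast (synthetic epoch-second index, not Alpaca's actual data): a float epoch index must be cast to int64 before `pd.to_datetime` will treat it as nanoseconds.

```python
import pandas as pd

# Epoch seconds as floats, as in the bar DataFrame index.
idx = pd.Index([1.6e9, 1.6e9 + 60.0])

# Cast to integer nanoseconds first, as proposed above.
ns = (idx * 1e9).astype('int64')
ts = pd.to_datetime(ns, utc=True)
print(ts[0])  # a tz-aware timestamp in 2020
```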
|
Title: Suggestion: add MiddlePressMouse, MiddleReleaseMouse and MiddleDragDrop methods
Body: A suggestion for review: please add the following wrapper methods, thanks!
At uiautomation line1916, add middle mouse button press/release methods MiddlePressMouse and MiddleReleaseMouse; at line2010, add a middle-button drag method MiddleDragDrop.
```python
def MiddlePressMouse(x: int, y: int, waitTime: float = OPERATION_WAIT_TIME) -> None:
    """
    Press middle mouse.
    x: int.
    y: int.
    waitTime: float.
    """
    SetCursorPos(x, y)
    screenWidth, screenHeight = GetScreenSize()
    mouse_event(MouseEventFlag.MiddleDown | MouseEventFlag.Absolute, x * 65535 // screenWidth, y * 65535 // screenHeight, 0, 0)
    time.sleep(waitTime)

def MiddleReleaseMouse(waitTime: float = OPERATION_WAIT_TIME) -> None:
    """
    Release middle mouse.
    waitTime: float.
    """
    x, y = GetCursorPos()
    screenWidth, screenHeight = GetScreenSize()
    mouse_event(MouseEventFlag.MiddleUp | MouseEventFlag.Absolute, x * 65535 // screenWidth, y * 65535 // screenHeight, 0, 0)
    time.sleep(waitTime)

def MiddleDragDrop(x1: int, y1: int, x2: int, y2: int, moveSpeed: float = 1, waitTime: float = OPERATION_WAIT_TIME) -> None:
    """
    Simulate mouse middle button drag from point x1, y1 drop to point x2, y2.
    x1: int.
    y1: int.
    x2: int.
    y2: int.
    moveSpeed: float, 1 normal speed, < 1 move slower, > 1 move faster.
    waitTime: float.
    """
    MiddlePressMouse(x1, y1, 0.05)
    MoveTo(x2, y2, moveSpeed, 0.05)
    MiddleReleaseMouse(waitTime)
```
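For reference, the `* 65535 //` scaling in these methods maps pixel coordinates into the 0..65535 absolute coordinate space that `mouse_event` expects when the Absolute flag is set. A quick standalone check of that arithmetic:

```python
# Standalone check of the absolute-coordinate scaling used above.
def to_absolute(x: int, y: int, screen_w: int, screen_h: int) -> tuple:
    # Map a pixel position into mouse_event's 0..65535 absolute space.
    return (x * 65535 // screen_w, y * 65535 // screen_h)

# The screen centre maps to roughly the middle of the absolute range.
print(to_absolute(960, 540, 1920, 1080))  # (32767, 32767)
```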
@yinkaisheng
| 0easy
|
Title: Add `compose` to the names of docker compose files
Body: ## Description
`local.yml`, `production.yml`, and `docs.yml` should be renamed something with "compose" in it: I propose `docker-compose.*`
## Rationale
The VSCode Docker extension needs "compose" in the filename to detect a docker compose file. This lets you right click on the file to run compose commands using it. Also, it puts the files next to each other alphabetically, and perhaps most importantly, more clearly communicates the purpose of the files. | 0easy
|
Title: Improve Providers extending
Body: At the moment, every extended provider has to implement the override logic itself:
``` python
class Extended(Provider):
def __call__(self, *args, **kwargs):
"""Return provided instance."""
if self.overridden:
return self.last_overriding(*args, **kwargs)
```
The provider extending process needs to be improved.
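One possible direction (a sketch with hypothetical names, not the library's actual API): hoist the override check into the base `__call__` once, and let subclasses implement only a `_provide()` hook, so extended providers no longer repeat the override logic.

```python
# Sketch: template-method style base class; names are hypothetical.
class Provider:
    def __init__(self):
        self.overridden = False
        self.last_overriding = None

    def override(self, provider):
        self.overridden = True
        self.last_overriding = provider

    def __call__(self, *args, **kwargs):
        """Return provided instance, honouring overrides in one place."""
        if self.overridden:
            return self.last_overriding(*args, **kwargs)
        return self._provide(*args, **kwargs)

    def _provide(self, *args, **kwargs):
        raise NotImplementedError


class Extended(Provider):
    # Subclasses only supply the provision logic.
    def _provide(self):
        return "provided"


p = Extended()
print(p())  # provided
p.override(lambda: "overridden")
print(p())  # overridden
```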
| 1medium
|
Title: Can I use the C++ to connect the bert-serving-server?
Body: **Prerequisites**
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**Question**
I have finished training the text classification model using Python and exported the model's symbols and parameters.
Now I want to use C++ to run prediction, but I find there is no C++ API to connect to the `bert-serving-server`. The approach I have in mind is to have `bert-as-service` serve HTTP requests in JSON and then call the server from C++ via HTTP POST requests. Is there any better idea?
| 1medium
|
Title: [BUG] Console configuration for RichHandler with logging.config.DictConfig does not work
Body: - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
The [dictConfig](https://docs.python.org/3/library/logging.config.html#logging.config.dictConfig) method from the logging.config module does not seem to accept a console configuration when using the [RichHandler](https://rich.readthedocs.io/en/stable/logging.html).
I want to configure a console to control the color system of the output, but the console configuration does not seem to be supported in a dictionary configuration for the logger.
Here is an extract of the dict configuration I want to use for a logger named `ralph`; I can't find a proper definition for the console in the handler dictionary configuration.
```
"handlers": {
"console": {
"class": "rich.logging.RichHandler",
"level": "INFO",
"console": {
"class": "rich.console.Console",
"color_system": "standard",
"file": "file.txt"
}
}
},
"loggers": {
"ralph": {
"handlers": ["console"],
"level": "INFO",
}
}
```
When running the traceback is, as followed.
```
File "/usr/local/lib/python3.9/logging/__init__.py", line 1446, in info
self._log(INFO, msg, args, **kwargs)
File "/usr/local/lib/python3.9/logging/__init__.py", line 1589, in _log
self.handle(record)
File "/usr/local/lib/python3.9/logging/__init__.py", line 1599, in handle
self.callHandlers(record)
File "/usr/local/lib/python3.9/logging/__init__.py", line 1661, in callHandlers
hdlr.handle(record)
File "/usr/local/lib/python3.9/logging/__init__.py", line 952, in handle
self.emit(record)
File "/usr/local/lib/python3.9/site-packages/rich/logging.py", line 163, in emit
if isinstance(self.console.file, NullFile):
AttributeError: 'ConvertingDict' object has no attribute 'file'
```
I can't find documentation about this topic.
What I conclude is that it is necessary to instantiate a console to get this configuration into the handler. The `RichHandler` seems to be usable only without a defined console, falling back to stdout as the default.
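For reference, one workaround sketch: `dictConfig`'s special `"()"` key hands instantiation to a user-supplied factory, so the console can be built in code and passed to the handler there. The factory below returns a plain stdlib `StreamHandler` so the sketch runs without rich; with rich installed, the factory body would instead return `RichHandler(console=Console(color_system="standard"))`.

```python
import io
import logging
import logging.config

buf = io.StringIO()

# Hypothetical factory: with rich installed it would return
# RichHandler(console=Console(color_system="standard")).
def make_handler():
    return logging.StreamHandler(buf)

LOGGING = {
    "version": 1,
    "handlers": {
        # "()" switches dictConfig to user-defined instantiation, so the
        # console object is built in code rather than described as a dict.
        "console": {"()": make_handler},
    },
    "loggers": {
        "ralph": {"handlers": ["console"], "level": "INFO"},
    },
}

logging.config.dictConfig(LOGGING)
logging.getLogger("ralph").info("hello")
print(buf.getvalue().strip())  # hello
```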
| 1medium
|
Title: Document installation via conda-forge
Body: As explained [here](https://github.com/automl/auto-sklearn/issues/1131#issuecomment-823423641), Auto-sklearn can be installed via conda-forge. This should be reflected in the installation instruction. | 0easy
|
Title: Narrow the API gap between AnyIO and Trio
Body: I have received several complaints about AnyIO's and Trio's APIs differing (to no one's benefit).
I created this issue to discuss which parts of the API could reasonably be changed to match Trio's.
Some of these changes may require backwards incompatible changes.
I have tried to make a list of differences below but feel free to point out any omissions.
The functions which are fully equivalent and could be renamed or aliased without issues:
* `anyio.aopen()` vs `trio.open_file()`
* `anyio.receive_signals()` vs `trio.open_signal_receiver()`
* `anyio.run_in_thread()` vs `trio.run_sync_in_worker_thread()`
* `anyio.current_default_thread_limiter()` vs `trio.current_default_worker_thread_limiter()`
The API functions below have issues that need to be discussed.
`trio.CancelScope` vs `anyio.open_cancel_scope()`
--------------------------------------------------------------------
Trio itself changed this after AnyIO's release. Trio used to have an `open_cancel_scope()` function. Moving this to a class would probably require overriding `__new__()` to return an instance of a backend specific implementation of this.
`trio.open_nursery()` vs `anyio.create_task_group()`
---------------------------------------------------------------------
I personally find the term "nursery" unintuitive, and I have not run into it anywhere else. Furthermore, it puzzles me that, although the function is synchronous, it hasn't been replaced by a `NurseryManager` constructor, like was done with `CancelScope`.
`trio.MultiError` vs `anyio.ExceptionGroup`
--------------------------------------------------------
The long term plan is to unify this under the exceptiongroup library. This task is tracked in issue #17.
`trio.from_thread.run()` vs `anyio.run_async_from_thread()`
-------------------------------------------------------------------------------
I have a couple problems with this:
1. Why did this require a separate module?
2. Why are there separate `run()` and `run_sync()` methods? Couldn't `run()` just call the function and, if the return value is a coroutine, await on it?
Additionally, AnyIO does not currently support entering the event loop from outside of worker threads spawned from the event loop thread, mostly due to limitations imposed by Curio which does not support this. I will have to find a good solution to this.
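Point 2 above can be sketched in a few lines (names hypothetical, not AnyIO's or Trio's API): a single `run()` that awaits only when the callable hands back a coroutine.

```python
import asyncio
import inspect

async def run(func, *args):
    # Call the function; await only if it produced a coroutine.
    result = func(*args)
    if inspect.iscoroutine(result):
        result = await result
    return result

async def async_add(a, b):
    return a + b

def sync_add(a, b):
    return a + b

async def main():
    print(await run(async_add, 1, 2))  # 3
    print(await run(sync_add, 1, 2))   # 3

asyncio.run(main())
```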
Trio Channels vs AnyIO Streams
--------------------------------------------
This design issue is still [open](https://github.com/python-trio/trio/issues/959) in Trio's own issue tracker. | 2hard
|
Title: [FR] able to change class name of a bounding box and edit a bounding box
Body: ### Proposal Summary
currently, in order to change the class name of a bounding box, the user needs to launch an annotation tool like CVAT. this is cumbersome.
besides this, it would be great if we could edit a bounding box in fiftyone
### Motivation
- What is the use case for this feature? easier change of annotation
- Why is this use case valuable to support for FiftyOne users in general? ease of use for bounding box annotation
- Why is this use case valuable to support for your project(s) or organization? bounding box annotation
- Why is it currently difficult to achieve this use case? need to always load to CVAT and it's cumbersome
### What areas of FiftyOne does this feature affect?
- [ x ] App: FiftyOne application
- [ ] Core: Core `fiftyone` Python library
- [ ] Server: FiftyOne server
### Details
P1: able to change class name of bounding box in Fiftyone
P2: able to edit bounding box coordinates if the bounding box is inaccurate
P3: able to create / delete bbox
### Willingness to contribute
The FiftyOne Community welcomes contributions! Would you or another member of your organization be willing to contribute an implementation of this feature?
- [ ] Yes. I can contribute this feature independently
- [ ] Yes. I would be willing to contribute this feature with guidance from the FiftyOne community
- [x] No. I cannot contribute this feature at this time
| 1medium
|
Title: Mongo fail-over during append can leave a Version in an inconsistent state
Body: We can trigger this assertion:
```
...
File "/app/AHL/packages/ahl.tickdownsample/1.16.0-py2.7/app/AHL/ahl.tickdownsample/lib/python2.7/site-packages/ahl.mongo-1.297.0-py2.7-linux-x86_64.egg/ahl/mongo/mongoose/store/version_store.py", line 105, in append
return super(VersionStore, self).append(symbol, data, metadata, prune_previous_version, upsert, **kwargs)
File "/opt/ahl/app/AHL/packages/ahl.tickdownsample/1.16.0-py2.7/app/AHL/ahl.tickdownsample/lib/python2.7/site-packages/arctic-1.4.0-py2.7-linux-x86_64.egg/arctic/decorators.py", line 50, in f_retry
return f(*args, **kwargs)
File "/opt/ahl/app/AHL/packages/ahl.tickdownsample/1.16.0-py2.7/app/AHL/ahl.tickdownsample/lib/python2.7/site-packages/arctic-1.4.0-py2.7-linux-x86_64.egg/arctic/store/version_store.py", line 449, in append
Append not possible - please call write() to get versions back in sync''')
ArcticException: version_nums is out of sync with previous version document.
This probably means that either a version document write has previously failed, or the previous version has been deleted.
Append not possible - please call write() to get versions back in sync
```
in append, as the `version_nums` are updated before the data is actually written.
| 2hard
|
Title: Data for column "Open" must be ALL float or int.
Body: ```python
!pip install mplfinance
import pandas as pd
import yfinance as yf
import mplfinance as mpf
#get data
symbol = "AAPL"
start_date = "2022-01-01"
end_date = "2022-12-31"
stock_data = yf.download(symbol, start=start_date, end=end_date)
# index
stock_data.index = pd.to_datetime(stock_data.index)
# K_line
mpf.plot(stock_data, type='candle', style='yahoo', title=f'{symbol} ')
```
get the error
```bash
[*********************100%***********************] 1 of 1 completed
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-5-7f9b0b934c33>](https://localhost:8080/#) in <cell line: 17>()
15
16 # K_line
---> 17 mpf.plot(stock_data)
1 frames
[/usr/local/lib/python3.10/dist-packages/mplfinance/_arg_validators.py](https://localhost:8080/#) in _check_and_prepare_data(data, config)
72 for col in cols:
73 if not all( isinstance(v,(float,int)) for v in data[col] ):
---> 74 raise ValueError('Data for column "'+str(col)+'" must be ALL float or int.')
75
76 if config['tz_localize']:
ValueError: Data for column "Open" must be ALL float or int.
```
| 1medium
|
Title: EBS in AWS clusters is all about issues and headaches
Body: Using EBS volumes is way too unstable on k8s AWS deployments. We've been using Kubernetes since version 1.2 and are now running a 1.5.2 cluster, and the problems have always been there and never been solved.
80% of the time when I need to delete a pod under a replication controller or deployment that has a PVC linked to it, it does not work. The problem varies; sometimes I saw this same message:
```
I0213 16:49:05.844132 6 aws.go:1366] Waiting for volume "vol-a60ecccc" state: actual=attaching, desired=attached
```
repeatedly for hours, and some other times this other message:
```
I0213 16:49:05.844132 6 aws.go:1366] Waiting for volume "vol-a60e9901" state: actual=detached, desired=attached
```
also for hours. The problem is only solved by deleting the pods several times and attaching and detaching the EBS volume manually through the AWS console several times, until it decides to work.
The problem is so recurrent that we've now started to use EFS through NFS when we need persistent storage.
I hope this gets fixed in future releases of Kubernetes, because it is discouraging our team from running applications that need persistent volumes under k8s. | 2hard
|
Title: State of the project — inactive
Body: TL;DR this project is on hold while the Pydantic team is flat out building [logfire](https://pydantic.dev/logfire) and continuing to maintain Pydantic.
We use FastUI internally for the admin interface of Logfire, and it works well for that application, but we don't have a strong impetus to invest heavily in improving FastUI.
---
Some background: I originally built FastUI in the hope we could use it for some user facing functionality in Logfire, that didn't happen for a bunch of reasons:
1. FastUI wasn't ready when we needed to build that stuff
2. There isn't enough repetitive CRUD functionality in Logfire to demand the truly reusable/composable component structure that FastUI tries to provide.
3. There was some complex use of generic unions we needed in Pydantic before arbitrary components would work in FastUI, which slowed us down.
Ultimately I couldn't persuade our frontend team that FastUI was the right tool to use, and I wasn't prepared to force them to use a tool I was still experimenting with.
Also, as per #48 I would really like the rendering of HTML in FastUI to happen exclusively (or mostly) serverside, but that's a big rewrite that I don't have time to work on right now.
I won't mark this repo as archived so discussions can continue, and we can continue to fix any bugs we find. But don't expect new PRs or issues to receive attention from the Pydantic team.
For those who were excited about FastUI, sorry to disappoint. 🙏
Maybe at some point in future I'll have the time and bandwidth to resume the project. | 3misc
|
Title: Purpose of Procrustes Analysis
Body: Hi there,
I am using PHATE on data sets with much success, and I am looking to understand the purpose of the Procrustes analysis between the classical MDS embedding and the metric MDS embedding in the `embed_mds` function. This is not necessarily an issue, but I couldn't find anything on the matter in the paper "Visualizing structure and transitions in high-dimensional biological data".
Thank you!
Josh | 3misc
|
Title: Inappropriate saving of the merged fine tuned llama-2 model
Body: Hi,
I am trying to fine-tune the llama-2 model with the help of the following config file:
```yaml
model_type: llm
base_model: /home/ubuntu/llama-2-7b-hf_for_merge
quantization:
  bits: 8
adapter:
  type: lora
  r: 8
  dropout: 0.05
  target_modules: null
  alpha: 16
  pretrained_adapter_weights: null
  postprocessor:
    merge_adapter_into_base_model: true
    progressbar: true
  bias_type: none
prompt:
  template: |
    ### Instruction:
    {Instruction}
    ### Context:
    {Context}
    ### Response:
input_features:
  - name: prompt
    type: text
    preprocessing:
      max_sequence_length: 1024
output_features:
  - name: Response
    type: text
    preprocessing:
      max_sequence_length: 512
trainer:
  type: finetune
  learning_rate: 0.0001
  batch_size: 1
  max_batch_size: 1
  gradient_accumulation_steps: 1
  enable_gradient_checkpointing: true
  epochs: 3
  learning_rate_scheduler:
    warmup_fraction: 0.01
preprocessing:
  sample_ratio: 1.0
backend:
  type: local
```
The fine-tuning is successful, and I can see that the merge and unload process also completed, as shown:
```
Unloading and merging model: 0%| | 0/518 [00:00<?, ?it/s]/opt/conda/envs/ludwig_train_env/lib/python3.10/site-packages/peft/tuners/lora/bnb.py:67: UserWarning: Merge lora module to 8-bit linear may get different generations due to rounding errors.
warnings.warn(
Unloading and merging model: 1%|▏ | 7/518 [00:00<00:07, 66.47it/s]
Unloading and merging model: ... (intermediate progress lines elided) ...
Unloading and merging model: 100%|██████████| 518/518 [00:05<00:00, 88.56it/s]
Removed shared tensor {'model.layers.7.self_attn.o_proj.weight_format', 'model.layers.17.self_attn.q_proj.weight_format', ... (hundreds of '*.weight_format' entries elided) ...} while saving. This should be OK, but check by verifying that you don't receive any warning while reloading
╒══════════╕
│ FINISHED │
╘══════════╛
Finetuning process has been completed..
Saving the finetuned base model..
Saving the finetuned base model completed..
```
When I checked the disk size of the saved model, it was only 7.6 MB, indicating that the merge did not happen properly.
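For reference, a quick way to sanity-check whether the merged full model was actually written is to total the bytes on disk: a merged model of this size class should be several GB, while a LoRA-adapter-only save is only a few MB. A small hypothetical helper (not part of Ludwig):

```python
import os

def dir_size_mb(path):
    """Total size of all files under `path`, in MB.

    Hypothetical helper (not part of Ludwig): a merged full model should be
    several GB on disk, while a LoRA-adapter-only save is only a few MB, so
    this total is a quick sanity check that merging actually happened.
    """
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / 1e6
```

Running it against the output directory of the finetuning run (path depends on your setup) should report thousands of MB for a correctly merged model.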
Environment:
```
absl-py==2.0.0
accelerate==0.24.1
aiohttp==3.8.6
aiohttp-cors==0.7.0
aiorwlock==1.4.0
aiosignal==1.3.1
anyio==4.2.0
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1698341106958/work
async-timeout==4.0.3
attrs==23.1.0
awscli==1.32.25
backports.functools-lru-cache @ file:///home/conda/feedstock_root/build_artifacts/backports.functools_lru_cache_1687772187254/work
beautifulsoup4==4.12.3
bitsandbytes==0.40.2
bleach==6.1.0
blessed==1.20.0
blinker==1.7.0
blis==0.7.11
botocore==1.34.25
Brotli==1.1.0
cachetools==5.3.2
captum==0.7.0
catalogue==2.0.10
certifi==2023.7.22
charset-normalizer==3.3.2
click==8.1.7
cloudpathlib==0.16.0
cloudpickle==3.0.0
colorama==0.4.4
colorful==0.5.6
comm @ file:///home/conda/feedstock_root/build_artifacts/comm_1691044910542/work
commonmark==0.9.1
confection==0.1.3
contourpy==1.2.0
cycler==0.12.1
cymem==2.0.8
Cython==3.0.5
dask==2023.3.2
dataclasses-json==0.6.2
datasets==2.14.6
debugpy @ file:///home/conda/feedstock_root/build_artifacts/debugpy_1695534290310/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
deepspeed==0.12.3
dill==0.3.7
distlib==0.3.8
docutils==0.16
et-xmlfile==1.1.0
exceptiongroup @ file:///home/conda/feedstock_root/build_artifacts/exceptiongroup_1692026125334/work
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1698579936712/work
faiss-cpu==1.7.4
fastapi==0.109.0
filelock==3.13.1
Flask==3.0.1
Flask-Compress==1.14
fonttools==4.47.2
frozenlist==1.4.0
fsspec==2023.9.2
future==0.18.3
getdaft==0.1.20
google-api-core==2.15.0
google-auth==2.23.4
google-auth-oauthlib==1.1.0
googleapis-common-protos==1.62.0
gpustat==1.1.1
GPUtil==1.4.0
grpcio==1.59.2
h11==0.14.0
h5py==3.10.0
hiplot==0.1.33
hjson==3.1.0
html5lib==1.1
httpcore==1.0.2
httpx==0.26.0
huggingface-hub==0.20.3
hummingbird-ml==0.4.10
hyperopt==0.2.7
idna==3.4
imagecodecs==2024.1.1
importlib-metadata @ file:///home/conda/feedstock_root/build_artifacts/importlib-metadata_1688754491823/work
ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1698244021190/work
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1698846603011/work
ipywidgets==8.1.1
itsdangerous==2.1.2
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1696326070614/work
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.3.2
jsonschema==4.6.2
jupyter_client @ file:///home/conda/feedstock_root/build_artifacts/jupyter_client_1699283905679/work
jupyter_core @ file:///home/conda/feedstock_root/build_artifacts/jupyter_core_1698673647019/work
jupyterlab-widgets==3.0.9
kaggle==1.5.16
kiwisolver==1.4.5
langcodes==3.3.0
lightgbm==4.2.0
lightgbm-ray==0.1.9
lightning-utilities==0.9.0
locket==1.0.0
loguru==0.7.2
loralib==0.1.2
ludwig==0.9.3
lxml==4.9.3
Markdown==3.5.1
MarkupSafe==2.1.3
marshmallow==3.20.1
marshmallow-dataclass==8.5.4
marshmallow-jsonschema==0.13.0
matplotlib==3.8.2
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1660814786464/work
mpi4py==3.1.5
mpmath==1.3.0
msgpack==1.0.7
multidict==6.0.4
multiprocess==0.70.15
murmurhash==1.0.10
mypy-extensions==1.0.0
nest-asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1697083700168/work
networkx==3.2.1
ninja==1.11.1.1
nltk==3.8.1
numpy==1.26.2
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-ml-py==12.535.133
nvidia-nccl-cu12==2.18.1
nvidia-nvjitlink-cu12==12.3.52
nvidia-nvtx-cu12==12.1.105
oauthlib==3.2.2
onnx==1.15.0
onnxconverter-common==1.13.0
opencensus==0.11.4
opencensus-context==0.1.3
openpyxl==3.1.2
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1696202382185/work
pandas==2.1.3
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1638334955874/work
partd==1.4.1
peft==0.6.2
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1667297516076/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
Pillow==10.1.0
platformdirs @ file:///home/conda/feedstock_root/build_artifacts/platformdirs_1699715570510/work
preshed==3.0.9
prometheus-client==0.19.0
prompt-toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1699963054032/work
protobuf==3.20.3
psutil==5.9.4
ptitprince==0.2.7
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1642875951954/work
py==1.11.0
py-cpuinfo==9.0.0
py-spy==0.3.14
py4j==0.10.9.7
pyarrow==14.0.1
pyasn1==0.5.0
pyasn1-modules==0.3.0
pydantic==1.10.13
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1691408637400/work
pynvml==11.5.0
pyparsing==3.1.1
pyrsistent==0.20.0
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1626286286081/work
python-multipart==0.0.6
python-slugify==8.0.1
pytz==2023.3.post1
pyxlsb==1.0.10
PyYAML==6.0
pyzmq @ file:///home/conda/feedstock_root/build_artifacts/pyzmq_1698062401223/work
ray==2.3.1
regex==2023.10.3
requests==2.31.0
requests-oauthlib==1.3.1
retry==0.9.2
rich==12.4.4
rsa==4.7.2
s3fs==0.4.2
s3transfer==0.10.0
sacremoses==0.1.1
safetensors==0.4.2
scikit-learn==1.3.2
scipy==1.11.3
seaborn==0.11.0
sentence-transformers==2.2.2
sentencepiece==0.1.99
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
smart-open==6.4.0
sniffio==1.3.0
soupsieve==2.5
spacy==3.7.2
spacy-legacy==3.0.12
spacy-loggers==1.0.5
srsly==2.4.8
stack-data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1669632077133/work
starlette==0.35.1
sympy==1.12
tabulate==0.9.0
tblib==3.0.0
tensorboard==2.15.1
tensorboard-data-server==0.7.2
tensorboardX==2.2
text-unidecode==1.3
thinc==8.2.1
threadpoolctl==3.2.0
tifffile==2024.2.12
tokenizers==0.15.2
toolz==0.12.0
torch==2.1.0
torchaudio==2.1.0
torchdata==0.7.0
torchinfo==1.8.0
torchmetrics==1.2.0
torchtext==0.16.0
torchvision==0.16.0
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1695373560918/work
tqdm==4.66.1
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1698671135544/work
transformers==4.37.2
triton==2.1.0
typer==0.9.0
typing-inspect==0.9.0
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1695040754690/work
tzdata==2023.3
urllib3==1.26.18
uvicorn==0.27.0
virtualenv==20.25.0
wasabi==1.1.2
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1699959196938/work
weasel==0.3.4
webencodings==0.5.1
Werkzeug==3.0.1
widgetsnbextension==4.0.9
wrapt==1.16.0
xgboost==2.0.3
xgboost-ray==0.1.18
xlrd==2.0.1
XlsxWriter==3.1.9
xlwt==1.3.0
xxhash==3.4.1
yarl==1.9.2
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1695255097490/work
```
Can someone help me solve this?
Thanks in advance.
| 2hard
|
Title: Turn off database query
Body: Hi, in another project we are using drf-yasg, but in the current one we have drf-spectacular. We noticed that hitting an endpoint wired to `SpectacularAPIView.as_view(...)` runs queries against the database. Is there any option to turn off these database queries? I found a note about `DjangoFilterBackend` relating to queries for choices fields ("AllValuesFilter: skip choices to prevent DB query"), but in the version we are using there is no `DjangoFilterExtension` class. Any suggestions? | 1medium
|
Title: [BUG-REPORT] bad error message during ordinal encode
Body: Thank you for reaching out and helping us improve Vaex!
Before you submit a new Issue, please read through the [documentation](https://docs.vaex.io/en/latest/). Also, make sure you search through the Open and Closed Issues - your problem may already be discussed or addressed.
**Description**
Please provide a clear and concise description of the problem. This should contain all the steps needed to reproduce the problem. A minimal code example that exposes the problem is very appreciated.
**Software information**
- Vaex version (`import vaex; vaex.__version__`): vaex-core 4.10.0, vaex-hdf5 0.12.3
- Vaex was installed via: pip
- OS: mac/linux
**Additional information**
```
import vaex
df = vaex.datasets.titanic()
values = list(str(df.sex.unique()))
print(values)
df = df.ordinal_encode("sex", values=values + [str(i) for i in range(100)], lazy=True)
df["sex"] = df.sex.fillmissing(-1).astype("int32")
df
```

| 1medium
|
Title: JupyterHub Deployments Using GitOps Tools (FluxCD/ArgoCD)
Body: Hello JupyterHub team,
I've been exploring the current documentation and setup processes for JupyterHub on Kubernetes, primarily managed through Helm. This setup works well for basic deployments, but I've noticed a potential gap for large-scale, enterprise-grade deployments.
Many enterprise data science and engineering teams might prefer integrating JupyterHub with existing GitOps workflows, typically managed via FluxCD or ArgoCD, rather than directly using Helm for every change. This approach leverages their existing CI/CD pipelines and enhances maintainability and scalability.
Given this, I propose expanding the documentation to include detailed guidance on integrating JupyterHub with FluxCD and ArgoCD. This enhancement will:
- Provide step-by-step instructions on setting up JupyterHub using FluxCD/ArgoCD for resource and configuration reconciliation.
- Include practical configurations for a multi-user, highly available JupyterHub environment suitable for enterprise-level deployment, especially those requiring substantial GPU resources.
- Offer comprehensive debugging documentation to assist teams in quickly resolving issues.
I believe these additions will significantly streamline the setup process for large teams and institutions, reducing the overhead associated with integrating JupyterHub into large-scale infrastructure.
I am eager to contribute by drafting the documentation and configuration examples. Before proceeding, I'd like to gather feedback on this idea and any specific requirements or suggestions the community or maintainers might have.
Looking forward to your thoughts and hoping to contribute effectively to this amazing project!
| 1medium
|
Title: Do not fire InsufficientResourceError when there are intentional reasons
Body: <!-- Thanks for sending a pull request! Here are some tips for you:
1. If this is your first time, read our contributor guidelines https://github.com/kubernetes/community/blob/master/contributors/devel/pull-requests.md#the-pr-submit-process and developer guide https://github.com/kubernetes/community/blob/master/contributors/devel/development.md#development-guide
2. If you want *faster* PR reviews, read how: https://github.com/kubernetes/community/blob/master/contributors/devel/pull-requests.md#best-practices-for-faster-reviews
3. Follow the instructions for writing a release note: https://github.com/kubernetes/community/blob/master/contributors/devel/pull-requests.md#write-release-notes-if-needed
-->
**What this PR does / why we need it**:
**Which issue this PR fixes**: fixes #45780
**Special notes for your reviewer**:
Return directly of essential predicates failed.
**Release note**:
```release-note
NONE
```
| 1medium
|
Title: [APM] Improve API test to avoid rate-agg-like regressions in the future
Body: In https://github.com/elastic/kibana/pull/112343 we removed rate aggs because they have a bug in ES.
We only noticed the bug a few days before the release even though it has been in place for months (since the metric-based UI was enabled).
We should investigate how we can catch problems like this in the future.
|
Title: Unexpected result from vbt.Portfolio.from_signals
Body: From In [16] in /tests/notebooks/portfolio.ipynb
```python
# `price`, `vbt`, and `SizeType` are defined earlier in the notebook
portfolio = vbt.Portfolio.from_signals(
price,
entries=[True, False, True, True, True],
exits=[False, True, False, False, True],
size=[1, 2, 3, 4, 5],
size_type=SizeType.Cash, accumulate=False)
print(portfolio.cash)
```
the output is
```python
2020-01-01 99.0
2020-01-02 101.0
2020-01-03 98.0
2020-01-04 98.0
2020-01-05 5.0
dtype: float64
```
Notice that the last row (cash in hand) is 5. It should ideally be 98 since both entry and exit signals are true.
Additionally, when both signals are true, in **signals_order_func_nb**
```python
if size_type == SizeType.Shares:
order_size = abs(size) - run_shares
```
will also give an unexpected result when accumulate is True and there have been multiple buys before this condition.
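For illustration, dropping bars where entry and exit fire together could be sketched in plain Python (this is not vectorbt's code, just one possible handling of conflicting signals):

```python
import numpy as np

def resolve_conflicts(entries, exits):
    """Drop bars where an entry and an exit fire simultaneously.

    Illustrative only: vectorbt resolves signals inside its order function,
    not via a pre-processing step like this.
    """
    entries = np.asarray(entries, dtype=bool)
    exits = np.asarray(exits, dtype=bool)
    conflict = entries & exits
    return entries & ~conflict, exits & ~conflict
```

Applied to the example above, the last bar (where both signals are True) would simply be skipped, leaving the cash balance at 98.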
I think it makes most sense to do nothing when both signals are True. | 1medium
|
Title: Binomial(...).sample has an infinite loop when combined with jax.vmap and p=0
Body: This seems to be the same bug the cropped up in the [Poisson case](https://github.com/pyro-ppl/numpyro/issues/582). And is a result of [combining `lax.cond`, `vmap` and data-dependent inputs](https://github.com/google/jax/issues/2947):
- one of the branches in `_binomial_dispatch` results in an infinite loop when p=0.
- normally that branch is only taken when `n * p > 10`
- but `vmap` causes jax to convert the conditional execution into a select operation.
- so, when `dist.Binomial(...).sample` is wrapped in a `vmap`, both code paths execute and we get an infinite loop.
The solution is to just wrap the branch in an additional condition check. I'll send in a PR doing this soon.
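The select semantics that cause this can be sketched without JAX at all. A plain-NumPy analogy (illustrative only; the function and branch names are made up and this is not numpyro's actual dispatch code):

```python
import numpy as np

calls = []

def large_np_branch(n, p):
    # Stand-in for the n*p > 10 sampling kernel; in numpyro this branch
    # contains the rejection loop that never terminates when p == 0.
    calls.append("large")
    return np.floor(n * p)

def small_np_branch(n, p):
    calls.append("small")
    return np.zeros_like(p)

def dispatch_like_vmap(n, p):
    # Under vmap, lax.cond is lowered to a select: BOTH branch values are
    # computed first, then one is picked per element.
    return np.where(n * p > 10, large_np_branch(n, p), small_np_branch(n, p))

def dispatch_with_guard(n, p):
    # The fix pattern: sanitize the dangerous branch's inputs whenever its
    # precondition is false, so evaluating it unconditionally is harmless.
    safe_p = np.where(n * p > 10, p, 0.5)  # dummy value keeps the branch well-defined
    return np.where(n * p > 10, large_np_branch(n, safe_p), small_np_branch(n, p))
```

With `np.where`, both branch expressions are evaluated before selection, which mirrors how `vmap` turns `lax.cond` into `lax.select`; the guarded variant shows the extra condition check described above.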
Here's a reproducing example:
```python
import jax
import numpyro.distributions as dist
def lets_break_this(p, key):
# One of many ways of making n
n = (jax.random.uniform(key) > 0.5) * 3
_, key = jax.random.split(key)
return dist.Binomial(total_count=n, probs=p).sample(key)
print('this works fine')
jax.jit(lets_break_this)(0.1, jax.random.PRNGKey(0))
jax.jit(jax.vmap(lets_break_this, in_axes=(None, 0)))(
0.1, jax.random.split(jax.random.PRNGKey(0), 1)
)
print('this works fine too!')
jax.jit(lets_break_this)(0, jax.random.PRNGKey(0))
print('jit with vmap and p=0 results in hanging forever')
jax.jit(jax.vmap(lets_break_this, in_axes=(None, 0)))(
0, jax.random.split(jax.random.PRNGKey(0), 1)
)
``` | 1medium
|
Title: n_position in positional encoding
Body: How did you choose n_position as 200. What is this number based on?
Hoping to hear back. Thank you! | 3misc
|
Title: add GitHub Sponsors
Body: Could developers add [GitHub Sponsors](https://github.com/sponsors) option in this repository?
Thanks. | 3misc
|
Title: Zombie excel instances from RunPython [pywin32 > 301 affected]
Body: #### OS: Windows 10
#### Versions: xlwings 0.25, Excel 365, Python 3.9
## Setup
I've build a custom excel ribbon add-in, which calls different subs on click. I am not using the autogenerated functions as I don't have access to the vba model. Thus i've written simple RunPython calls in vba which call the python functions (no sub decorator needed). I am using the optimized connection, i.e. the server does not shut down after a single call.
## Problem
If I have any reference to the xlwings app object in my sub, i.e. even a simple assignment like
```python
def my_first_sub():
    a = xw.Book.caller().app
```
or
```python
def my_second_sub():
a = xw.Book()
```
the Python process is not cleaned up after closing all Excel windows. Even worse, the global COM reference to the app/book prevents a proper shutdown of Excel, which results in an "invisible" zombie process. If Excel is started again, all add-ins are missing and it stays broken until the zombie is killed via e.g. the Task Manager.
## Solution (?)
1) One of my use cases is using the `status_bar` property of `app.api`, or a win32 message box using `app.api.hwnd`, e.g. for logging. To prevent a zombie, I start another COM Dispatch client inside the function and set the StatusBar property "without" xlwings.
2) My second use case requires xlwings' methods, so a "manual" COM connection is not an alternative. So far I have solved this by using a separate thread, i.e. something along these lines:
```python
def do_something():
a = xw.Book()
# Do something
def my_second_sub():
t = threading.Thread(target = do_something)
t.daemon = True
t.start()
```
If the thread finishes, proper clean up happens, and excel/python shuts down correctly.
Is there any less ugly way to use `xw.Book()` or the app instance in the sub without creating zombies?
Thanks for any help and/or suggestions! | 2hard
|
Title: Why can't I use Model.from_pretrained directly with damo/cv_unet_image-matting?
Body: Sorry, I'm a beginner. I tried to use Model.from_pretrained to load the model cv_unet_image-matting, but it fails with the following error:
KeyError('`cfg` or `default_args` must contain the key "type", but got {\'model_dir\': \'/mnt/workspace/.cache/modelscope/damo/cv_unet_image-matting\'}\nNone')
I don't know what else I need to provide. | 0easy
|
Title: [Bug] training a new model stops with "Decoder stopped with `max_decoder_steps` 10000"
Body: ### Describe the bug
Hello!
I am running TTS 0.15.6 and trying to train a new voice based on 22,000 wav files. The training process seems to work but stops after a while, sometimes after 30 minutes and sometimes after 2 hours. Please see my log here:
```
[...]
--> STEP: 14
| > decoder_loss: 16.613059997558594 (17.25351878574916)
| > postnet_loss: 17.12180519104004 (21.706623349870956)
| > stopnet_loss: 0.6667962074279785 (0.6582207466874804)
| > decoder_coarse_loss: 15.918059349060059 (16.576676845550537)
| > decoder_ddc_loss: 0.0008170512155629694 (0.002927447099604511)
| > ga_loss: 0.0023095402866601944 (0.006352203565516642)
| > decoder_diff_spec_loss: 1.2021437883377075 (1.1472604700497218)
| > postnet_diff_spec_loss: 0.7060739398002625 (0.7125482303755624)
| > decoder_ssim_loss: 0.8515534400939941 (0.8743813676493508)
| > postnet_ssim_loss: 0.7912083864212036 (0.8292492159775325)
| > loss: 13.979524612426758 (15.465778078351702)
| > align_error: 0.9930884041823447 (0.9810643588259284)
warning: audio amplitude out of range, auto clipped.
--> EVAL PERFORMANCE
| > avg_loader_time: 0.00717801707131522 (+0.003781165395464216)
| > avg_decoder_loss: 17.25351878574916 (-1.5459022521972656)
| > avg_postnet_loss: 21.706623349870956 (-0.7187363760811927)
| > avg_stopnet_loss: 0.6582207466874804 (-0.1437307809080396)
| > avg_decoder_coarse_loss: 16.576676845550537 (-1.278356620243617)
| > avg_decoder_ddc_loss: 0.002927447099604511 (+0.00029547616473532155)
| > avg_ga_loss: 0.006352203565516642 (-2.3055106534489687e-05)
| > avg_decoder_diff_spec_loss: 1.1472604700497218 (+0.06803790586335312)
| > avg_postnet_diff_spec_loss: 0.7125482303755624 (+0.019743638379233208)
| > avg_decoder_ssim_loss: 0.8743813676493508 (-0.000993796757289389)
| > avg_postnet_ssim_loss: 0.8292492159775325 (-0.00942203402519226)
| > avg_loss: 15.465778078351702 (-1.0101798602512932)
| > avg_align_error: 0.9810643588259284 (-0.0001967931166291237)
| > Synthesizing test sentences.
> Decoder stopped with `max_decoder_steps` 10000
> Decoder stopped with `max_decoder_steps` 10000
> Decoder stopped with `max_decoder_steps` 10000
> Decoder stopped with `max_decoder_steps` 10000
> Decoder stopped with `max_decoder_steps` 10000
```
This issue was reported in the past, but the thread was closed without the issue being fixed. Are there any updates?
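For context, the warning's step count comes from the decoder step cap in the Tacotron config. One workaround mentioned in earlier threads is raising that cap so test-sentence synthesis can run longer (a config fragment; whether `max_decoder_steps` is accepted by `Tacotron2Config` in exactly this TTS version is an assumption to verify):

```python
config = Tacotron2Config(
    # ... settings as in the script below ...
    max_decoder_steps=20000,  # assumption: raises the 10000-step cap from the warning
)
```

Note that this only delays the cutoff; it does not address why the attention fails to produce a stop token in the first place.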
### To Reproduce
```
import os
import sklearn
from TTS.config.shared_configs import BaseAudioConfig
from trainer import Trainer, TrainerArgs
from TTS.tts.configs.shared_configs import BaseDatasetConfig, CharactersConfig
from TTS.tts.configs.tacotron2_config import Tacotron2Config
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.tacotron2 import Tacotron2
from TTS.utils.audio import AudioProcessor
from TTS.tts.utils.text.tokenizer import TTSTokenizer
output_path = os.path.dirname(os.path.abspath(__file__))
dataset_config = BaseDatasetConfig(formatter="thorsten", meta_file_train="metadata.csv", path="/home/marc/Desktop/AI/Voice_Cloning3/")
character_config = CharactersConfig(
characters="ABCDEFGHIJKLMNOPQRSTUVWXYZ!',-.:;?abcdefghijklmnopqrstuvwxyzßäéöü̈‒–—‘’“„ ",
punctuations="!'(),-.:;? \u2012\u2013\u2014\u2018\u2019",
pad="_",
eos="~",
bos="^",
phonemes=" a b d e f h i j k l m n o p r s t u v w y z ç ð ø œ ɑ ɒ ɔ ɛ ɡ ɪ ɹ ʃ ʊ ʌ ʏ!!!!!!!?,....:;??!abdefhijklmnoprstuvwxyzçøŋœɐɑɒɔəɛɜɡɪɹɾʃʊʌʏʒː̩̃"
)
audio_config = BaseAudioConfig(
stats_path="/home/marc/Desktop/AI/Voice_Cloning3/stats-thorsten-dec2021-22k.npy",
sample_rate=22050,
do_trim_silence=True,
trim_db=60.0,
signal_norm=False,
mel_fmin=50,
spec_gain=1.0,
log_func="np.log",
ref_level_db=20,
preemphasis=0.0,
)
config = Tacotron2Config( # This is the config that is saved for the future use
audio=audio_config,
batch_size=40, # BS of 40 and max length of 10s will use about 20GB of GPU memory
eval_batch_size=16,
num_loader_workers=4,
num_eval_loader_workers=4,
run_eval=True,
test_delay_epochs=-1,
r=6,
gradual_training=[[0, 6, 64], [10000, 4, 32], [50000, 3, 32], [100000, 2, 32]],
double_decoder_consistency=True,
epochs=1000,
text_cleaner="phoneme_cleaners",
use_phonemes=True,
phoneme_language="de",
phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
precompute_num_workers=8,
print_step=25,
print_eval=True,
mixed_precision=False,
test_sentences=[
"Es hat mich viel Zeit gekostet ein Stimme zu entwickeln, jetzt wo ich sie habe werde ich nicht mehr schweigen.",
"Sei eine Stimme, kein Echo.",
"Es tut mir Leid David. Das kann ich leider nicht machen.",
"Dieser Kuchen ist großartig. Er ist so lecker und feucht.",
"Vor dem 22. November 1963.",
],
# max audio length of 10 seconds, feel free to increase if you got more than 20GB GPU memory
max_audio_len=22050 * 10,
output_path=output_path,
datasets=[dataset_config],
)
ap = AudioProcessor.init_from_config(config)  # note: an earlier AudioProcessor(**config.audio.to_dict()) call here was redundant and immediately overwritten
tokenizer, config = TTSTokenizer.init_from_config(config)
train_samples, eval_samples = load_tts_samples(
dataset_config,
eval_split=True,
eval_split_max_size=config.eval_split_max_size,
eval_split_size=config.eval_split_size,
)
model = Tacotron2(config, ap, tokenizer, speaker_manager=None)
trainer = Trainer(
TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples
)
trainer.fit()
```
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
TTS 0.15.6
Python 3.9.17
Ubuntu 23.04
cuda/cudnn originally 11.8 but TTS installed into the conda environment
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-cupti-cu11 11.7.101
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
nvidia-cufft-cu11 10.9.0.58
nvidia-curand-cu11 10.2.10.91
nvidia-cusolver-cu11 11.4.0.1
nvidia-cusparse-cu11 11.7.4.91
nvidia-nccl-cu11 2.14.3
nvidia-nvtx-cu11 11.7.91
Hardware AMD 5900X, RTX 4090
```
### Additional context
_No response_ | 2hard
|
Title: Corrected ARCH
Body: **What type of PR is this?**
> Uncomment only one ` /kind <>` line, hit enter to put that in a new line, and remove leading whitespace from that line:
>
> /kind api-change
> /kind bug
> /kind cleanup
> /kind design
> /kind documentation
> /kind failing-test
> /kind feature
> /kind flake
**What this PR does / why we need it**:
Builds the kube-cross Docker image.
**Which issue(s) this PR fixes**:
To Fix https://github.com/kubernetes/kubernetes/issues/75114
**Special notes for your reviewer**:
**Does this PR introduce a user-facing change?**:
NONE
| 1medium
|
Title: Post Request Forbidden error, when user is login
Body: without login post method working fine. but when login, I'm getting 403: Forbidden error.
Below is the configuration.
```python
SWAGGER_SETTINGS = {
    'LOGIN_URL': '/admin/login',
    'LOGOUT_URL': '/admin/logout',
}
```
urls.py
```python
urlpatterns1 = [re_path(r'^', include('api.urls'))]

schema_view = get_schema_view(
    openapi.Info(
        title="Snippets API",
        default_version='v1',
        description="Test description",
        terms_of_service="https://www.google.com/policies/terms/",
        contact=openapi.Contact(email="contact@snippets.local"),
        license=openapi.License(name="BSD License"),
    ),
    public=False,
    permission_classes=(permissions.AllowAny,),
    patterns=urlpatterns1,
)

urlpatterns = urlpatterns1 + [
    url(r'^swagger(?P<format>\.json|\.yaml)$', login_required(schema_view.without_ui(cache_timeout=None)), name='schema-json'),
    url(r'^swagger/$', login_required(schema_view.with_ui('swagger', cache_timeout=None)), name='schema-swagger-ui'),
    url(r'^redoc/$', login_required(schema_view.with_ui('redoc', cache_timeout=None)), name='schema-redoc'),
    path('admin/', admin.site.urls),
]
```
views.py
```python
request_category_put = openapi.Schema(
    type=openapi.TYPE_OBJECT,
    required=['question_id', 'choice_id'],
    properties={
        'question_id': openapi.Schema(type=openapi.TYPE_INTEGER),  # <-- I have the ID here.
        'choice_id': openapi.Schema(type=openapi.TYPE_INTEGER),
    },
    example={
        'question_id': 1,
        'choice_id': 1,
    },
)

class SubmitChoice(APIView):
    @swagger_auto_schema(
        request_body=request_category_put,
        responses={
            '200': "success",
            '400': 'Bad Request',
            '404': 'Not found',
        },
    )
    def post(self, request, *args, **kwargs):
        ..................
        return response
```
| 1medium
|
Title: [BUG] duplicated mapping key on adding response header types for openapi/swagger
Body: I wanted to set the returned response header types for my `login/` API with the `openapi_extra` param, but it is not allowing me to do so.
```python
@router.get("login/", response={200: AuthOutSchema, 400: MessageOut}, auth=None, openapi_extra={
"responses": {
"200": {
"headers": {
"Set-Cookie": {
"type": "string",
"description": "Session Id Token",
}
},
"description": "successful operation",
}
}
})
@csrf_exempt
def login_user(request, username: str, password: str):
```
If I change it to `203`, it allows me to use the response header feature.
**Versions (please complete the following information):**
- Python version: 3.11.2
- Django version: 4.1.6
- Django-Ninja version: 0.22.2
- Pydantic version: 1.10.13
| 1medium
|
Title: Formgrade does not create the `root` directory if it doesn't exist
Body: Can someone confirm that a minimal notebook with nbgrader does _not_ create the `CourseDirectory.root` directory if it does not exist
### Operating system
Docker instance: `jupyter/minimal-notebook:dc9744740e12`
### `nbgrader --version`
nbgrader version 0.7.0.dev
(installed with `pip install -e git+https://github.com/jupyter/nbgrader.git#egg=nbgrader `)
### `jupyterhub --version` (if used with JupyterHub)
Not used
### `jupyter notebook --version`
6.1.5
### Expected behavior
UI _new Assignment_ creates the required folders
### Actual behavior
UI _new Assignment_ fails to find the required folder
### Steps to reproduce the behavior
Build a docker image:
```
FROM jupyter/minimal-notebook:dc9744740e12
USER root
RUN mkdir -p /srv/nbgrader/exchange
USER $NB_USER
RUN conda install --quiet --yes 'nbconvert==5.6.1'
USER root
WORKDIR /srv/
RUN pip install -e git+https://github.com/jupyter/nbgrader.git#egg=nbgrader
COPY nbgrader_config.py /etc/jupyter
ENV JUPYTER_CONFIG_DIR /etc/jupyter
# Required by the notebook 6.0.0 update
RUN fix-permissions $CONDA_DIR \
&& fix-permissions /srv/nbgrader/exchange
RUN jupyter nbextension install --sys-prefix --py nbgrader \
&& jupyter nbextension enable --sys-prefix validate_assignment/main --section=tree \
&& jupyter serverextension enable --sys-prefix nbgrader.server_extensions.validate_assignment \
&& jupyter nbextension enable --sys-prefix assignment_list/main --section=tree \
&& jupyter serverextension enable --sys-prefix nbgrader.server_extensions.assignment_list \
&& jupyter nbextension enable --sys-prefix create_assignment/main \
&& jupyter nbextension enable --sys-prefix formgrader/main --section=tree \
&& jupyter serverextension enable --sys-prefix nbgrader.server_extensions.formgrader
WORKDIR $HOME
USER $NB_USER
```
and in `nbgrader_config.py` we simply have:
```
c.CourseDirectory.course_id = 'My Course'
c.CourseDirectory.root = f"/home/jovyan/My Course"
c.Exchange.path_includes_course = True
```
make the docker image:
docker build -t test .
Run the notebook:
docker run -it --rm -p 8888:8888 test
.... and click on the link to open.
To show the error:
* Click on the `formgrader` tab
* Click on the `Add New Assignment...` link
* Check the logged output:
```
[E 14:38:00.989 NotebookApp] Uncaught exception PUT /formgrader/api/assignment/tree (172.17.0.1)
HTTPServerRequest(protocol='http', host='127.0.0.1:8888', method='PUT', uri='/formgrader/api/assignment/tree', version='HTTP/1.1', remote_ip='172.17.0.1')
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2285, in _wrap_pool_connect
return fn()
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 363, in connect
return _ConnectionFairy._checkout(self)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 773, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 492, in checkout
rec = pool._do_get()
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 238, in _do_get
return self._create_connection()
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 308, in _create_connection
return _ConnectionRecord(self)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 437, in __init__
self.__connect(first_connect_check=True)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 657, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 69, in __exit__
exc_value, with_traceback=exc_tb,
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
raise exception
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 652, in __connect
connection = pool._invoke_creator(self)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
return dialect.connect(*cargs, **cparams)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 488, in connect
return self.dbapi.connect(*cargs, **cparams)
sqlite3.OperationalError: unable to open database file
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/tornado/web.py", line 1701, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "/opt/conda/lib/python3.7/site-packages/tornado/web.py", line 3178, in wrapper
return method(self, *args, **kwargs)
File "/srv/src/nbgrader/nbgrader/server_extensions/formgrader/base.py", line 108, in wrapper
return f(self, *args, **kwargs)
File "/srv/src/nbgrader/nbgrader/server_extensions/formgrader/base.py", line 117, in wrapper
return f(self, *args, **kwargs)
File "/srv/src/nbgrader/nbgrader/server_extensions/formgrader/apihandlers.py", line 145, in put
self.gradebook.update_or_create_assignment(assignment_id, **assignment)
File "/srv/src/nbgrader/nbgrader/server_extensions/formgrader/base.py", line 38, in gradebook
gb = Gradebook(self.db_url, self.coursedir.course_id)
File "/srv/src/nbgrader/nbgrader/api.py", line 1351, in __init__
db_exists = len(self.engine.table_names()) > 0
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2265, in table_names
with self._optional_conn_ctx_manager(connection) as conn:
File "/opt/conda/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2049, in _optional_conn_ctx_manager
with self._contextual_connect() as conn:
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2251, in _contextual_connect
self._wrap_pool_connect(self.pool.connect, None),
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2289, in _wrap_pool_connect
e, dialect, self
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1555, in _handle_dbapi_exception_noconnection
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
raise exception
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2285, in _wrap_pool_connect
return fn()
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 363, in connect
return _ConnectionFairy._checkout(self)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 773, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 492, in checkout
rec = pool._do_get()
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 238, in _do_get
return self._create_connection()
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 308, in _create_connection
return _ConnectionRecord(self)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 437, in __init__
self.__connect(first_connect_check=True)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 657, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 69, in __exit__
exc_value, with_traceback=exc_tb,
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
raise exception
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 652, in __connect
connection = pool._invoke_creator(self)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
return dialect.connect(*cargs, **cparams)
File "/opt/conda/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 488, in connect
return self.dbapi.connect(*cargs, **cparams)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
(Background on this error at: http://sqlalche.me/e/e3q8)
[E 14:38:00.999 NotebookApp] {
"Host": "127.0.0.1:8888",
"Connection": "keep-alive",
"Content-Length": "65",
"Accept": "application/json, text/javascript, */*; q=0.01",
"Dnt": "1",
"X-Csrftoken": "2|0c030e9b|3af24406faa6dc29d14c374be4b43d22|1606497265",
"X-Requested-With": "XMLHttpRequest",
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36",
"Content-Type": "application/json",
"Origin": "http://127.0.0.1:8888",
"Sec-Fetch-Site": "same-origin",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Dest": "empty",
"Referer": "http://127.0.0.1:8888/formgrader",
"Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "en-GB,en-US;q=0.9,en;q=0.8",
"Cookie": "_ga=GA1.1.595002085.1567063186; _xsrf=2|0c030e9b|3af24406faa6dc29d14c374be4b43d22|1606497265; username-127-0-0-1-8888=\"2|1:0|10:1607611060|23:username-127-0-0-1-8888|44:ZWQ0MzUzNzhlOTYxNDc3ZDhkMDc0MDQ0YzNjN2MwMTQ=|fb5b9eb46d042d4cf8ec089304e200f45bb2e32396b5043645252f4ac705d592\""
}
[E 14:38:00.999 NotebookApp] 500 PUT /formgrader/api/assignment/tree (172.17.0.1) 23.42ms referer=http://127.0.0.1:8888/formgrader
```
My thought is that the validator for `root` in `coursedir.py` (here: https://github.com/jupyter/nbgrader/blob/master/nbgrader/coursedir.py#L224 ) should really read something like
```
@validate('root')
def _validate_root(self, proposal: Bunch) -> str:
    path = os.path.abspath(proposal['value'])
    if path != proposal['value']:
        self.log.warning("root '%s' is not absolute, standardizing it to '%s'", proposal['value'], path)
    if not os.path.exists(path):
        os.makedirs(path)
    elif not os.path.isdir(path):
        self.log.warning("root '%s' already exists, but is not a directory", proposal['value'])
    return path
``` | 1medium
|
Title: `singleuser.image` is not picked up
Body: <!-- Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->
### Bug description
Custom Image under `singleuser` is not working.
#### Expected behaviour
My `values.yaml` file is as below, since I want `jupyter/datascience-notebook` to be the base image when users launch a server. The motivation is that I won't have to install extra common packages such as pandas.
```
singleuser:
storage:
type: dynamic
extraLabels: {}
extraVolumes: []
extraVolumeMounts: []
static:
pvcName:
subPath: "{username}"
capacity: 10Gi
homeMountPath: /home/jovyan
dynamic:
storageClass:
pvcNameTemplate: claim-{username}{servername}
volumeNameTemplate: volume-{username}{servername}
storageAccessModes: [ReadWriteOnce]
image:
name: jupyter/datascience-notebook
tag: "set-by-chartpress"
pullPolicy:
pullSecrets: []
startTimeout: 300
cpu:
limit:
guarantee:
memory:
limit:
guarantee: 1G
extraResource:
limits: {}
guarantees: {}
cmd: null
defaultUrl:
extraPodConfig: {}
```
#### Actual behaviour
When I run `kubectl describe pod jupyter-pebabion` it shows that the notebook container is still `jupyterhub/k8s-singleuser-sample:1.2.0`.
### Your personal set up
- OS: I'm using Macbook M1. K8S is running on DigitalOcean.
- Version(s):
| 1medium
|
Title: Coverage is not measured with Python 3.7.0
Body: I've noticed that `pytest-cov` doesn't report properly the coverage with Python 3.7.0. Here a simple repro:
### Directory structure
```
|-- foo
| |-- __init__.py
| `-- bar.py
`-- tests
|-- __init__.py
`-- test_baz.py
```
### Files content
All `__init__.py` files are empty.
#### `bar.py`
```
def baz():
print('baz')
```
#### `test_baz.py`
```
from foo import bar
def test_foo():
bar.baz()
```
### `pip freeze` for both Python 3.6.5 and 3.7.0 environments
```
$ pip freeze
atomicwrites==1.1.5
attrs==18.1.0
coverage==4.5.1
more-itertools==4.2.0
pluggy==0.6.0
py==1.5.4
pytest==3.6.3
pytest-cov==2.5.1
six==1.11.0
```
### Test results with Python 3.6.5 (correct result)
```
$ py.test -p no:logging --strict --cov-report=term-missing --cov=foo tests
=========================================== test session starts ============================================
platform darwin -- Python 3.6.5, pytest-3.6.3, py-1.5.4, pluggy-0.6.0
rootdir: /private/tmp/foo, inifile:
plugins: cov-2.5.1
collected 1 item
tests/test_baz.py . [100%]
---------- coverage: platform darwin, python 3.6.5-final-0 -----------
Name Stmts Miss Cover Missing
-----------------------------------------------
foo/__init__.py 0 0 100%
foo/bar.py 2 0 100%
-----------------------------------------------
TOTAL 2 0 100%
========================================= 1 passed in 0.03 seconds =========================================
```
### Test results with Python 3.7.0 (incorrect result)
```
$ py.test -p no:logging --strict --cov-report=term-missing --cov=foo tests
=========================================== test session starts ============================================
platform darwin -- Python 3.7.0, pytest-3.6.3, py-1.5.4, pluggy-0.6.0
rootdir: /private/tmp/foo, inifile:
plugins: cov-2.5.1
collected 1 item
tests/test_baz.py . [100%]
---------- coverage: platform darwin, python 3.7.0-final-0 -----------
Name Stmts Miss Cover Missing
-----------------------------------------------
foo/__init__.py 0 0 100%
foo/bar.py 2 2 0% 1-2
-----------------------------------------------
TOTAL 2 2 0%
========================================= 1 passed in 0.03 seconds =========================================
```
| 1medium
|
Title: [CHARTS] Use `formatSeriesName` consistently when grouping chart data by release
Body: | 1medium
|
Title: Problem with learner_time_limit
Body: Hi Piotr:
Thanks for the changes made in the new version. I'm now getting an error when setting `learner_time_limit`. Did you take this option out? I see in your code that you are setting it automatically as a function of `total_time_limit`. Can this be the behavior only if `learner_time_limit` is None?
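The behavior being asked for could be sketched roughly like this (a minimal illustration with hypothetical names — `resolve_learner_time_limit` and `n_learners` are not part of the AutoML API, and the even split of the total budget is an assumption):

```python
def resolve_learner_time_limit(learner_time_limit, total_time_limit, n_learners):
    """Return the per-learner time budget in seconds.

    A user-supplied learner_time_limit always wins; only when it is None
    is the budget derived from total_time_limit (even split assumed here).
    """
    if learner_time_limit is not None:
        # explicit user setting is respected unchanged
        return learner_time_limit
    # fallback: divide the total budget across learners
    return total_time_limit / n_learners
```

Under this scheme, setting `learner_time_limit` explicitly would keep working as before, while the automatic derivation would only kick in as a fallback.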
| 1medium
|
Title: Can you set up Gzip Compression on the request?
Body: It would be nice to have Gzip compression on the request if possible | 1medium
|
Title: Small documentation error in AECEnv's step method
Body: Hi there,
The documentation for the step method in the AECEnv base class reads as follows:
```
'''
Receives a dictionary of actions keyed by the agent name.
Returns the observation dictionary, reward dictionary, done dictionary, and info dictionary,
where each dictionary is keyed by the agent.
'''
```
However, from looking at the `api_test.py` script in `pettingzoo/test/` I don't think is accurate, as you should just pass in the action for the current agent and nothing should be returned. Please correct me if I have misunderstood!
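To make the corrected contract concrete, here is a tiny pure-Python stand-in (not the real PettingZoo base class, just a sketch of the behavior described above): `step` takes only the current agent's action and returns nothing.

```python
class ToyAECEnv:
    """Minimal AEC-style stub illustrating the step() contract."""

    def __init__(self):
        self.agents = ["player_0", "player_1"]
        self._idx = 0
        self.rewards = {a: 0 for a in self.agents}

    @property
    def agent_selection(self):
        # the single agent whose turn it is to act
        return self.agents[self._idx]

    def step(self, action):
        # receives ONLY the current agent's action; returns None
        self.rewards[self.agent_selection] = action
        self._idx = (self._idx + 1) % len(self.agents)
```

In real PettingZoo usage, the observation, reward, done, and info for the acting agent are fetched separately (via `env.last()`), not returned from `step`.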
| 0easy
|
Title: Dockerfile not building image on Docker version 18.03.1-ce
Body: Please Describe the issue or question and share your OS and Python version.
_________________
**OS**: `ubuntu`
**OS Version**: `16.04`
**Python Version**: `N/A`, using `Docker version 18.03.1-ce, build 9ee9f40`
The Dockerfile fails to build the image with the following error:
```
running build_ext
building 'lxml.etree' extension
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/src
creating build/temp.linux-x86_64-2.7/src/lxml
gcc -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DTHREAD_STACK_SIZE=0x100000 -fPIC -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc/lxml/includes -I/usr/local/include/python2.7 -c src/lxml/etree.c -o build/temp.linux-x86_64-2.7/src/lxml/etree.o -w
In file included from src/lxml/etree.c:660:0:
src/lxml/includes/etree_defs.h:14:31: fatal error: libxml/xmlversion.h: No such file or directory
#include "libxml/xmlversion.h"
^
compilation terminated.
Compile failed: command 'gcc' failed with exit status 1
creating tmp
cc -I/usr/include/libxml2 -c /tmp/xmlXPathInitbVgtAC.c -o tmp/xmlXPathInitbVgtAC.o
/tmp/xmlXPathInitbVgtAC.c:1:26: fatal error: libxml/xpath.h: No such file or directory
#include "libxml/xpath.h"
^
compilation terminated.
*********************************************************************************
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
*********************************************************************************
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/usr/local/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-WQQF7N/lxml/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-is3w6W/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-WQQF7N/lxml/
The command '/bin/sh -c pip install --no-cache-dir -r requirements.txt' returned a non-zero code: 1
```
| 1medium
|
Title: Wrong behavior in compressed version
Body: I'm testing `django-jet` as `OpenEdx`'s admin theme and i have the correct behaviour in `LMS` section:

It works perfectly in `Opera - Chromium` and `Firefox` and, importantly, as you can see, none of the static files are compressed.
But in `Studio / CMS` the history is very different:

The styles are a melty of normal `Django` admin and `Jet` and make very ugly effects:
- Pinned button right padding is wrong.
- Header / Breadcrumbs section with normal styles.
- User button and menu dissapeared.
- Wrong `svg` icons rendering.
And others.
As you can see this site use compressor for compress the files.

So, i proceeded to delete the base admin style and i got a "perfect" behaviour, but i know this is not properly solution.
In `Firefox` the problem is not uglier than `Opera - Chromiun` but exist also:

As you can see, again, title section in menu doesn't appear and the user name button with menu also.

Deleting the same line of code (`admin-base`) the theme works "perfectly".
So, i think as conclusion that for any reason `Jet` styles are conflicting with normal styles with both are compressed.
Why happen this?
Additional information here:

| 2hard
|
Title: Regression: Plotly lost all interactivity on grapher
Body: **Describe the bug**
Once you select a few variables and hit finish, the chart is drawn, but no interactivity is avaiable (no hover, zoom, no menu etc)
This started with this commit to develop:
Merge branch 'develop' of https://github.com/adamerose/pandasgui into develop
805eb056 Adam Rose <adrotog@gmail.com> on 1/29/21 at 3:41 PM
It worked on the last two PR merges (#82 and #84); I just tested those against the same environments.
**Environment**
OS: (Windows 10, Linux Mint 19.3)
Python: (3.8.4, 3.8.5)
IDE: (ipython command line show() or run gui.py from PyCharm)
Anaconda environment with pip
**Package versions**
pip freeze | grep -i "pyqt\|pandasgui\|plotly\|ipython\|jupyter\|notebook"
```
ipython==7.20.0
ipython-genutils==0.2.0
pandasgui @ git+https://github.com/adamerose/PandasGUI.git@63b95aeecf4f41d7ac5309b6a6d31a87aad44e45
plotly==4.14.3
PyQt5==5.15.2
PyQt5-sip==12.8.1
PyQtWebEngine==5.15.2
```
| 1medium
|
Title: Can I run the whole pipeline on a remote server with no GUI?
Body: cannot connect to X server localhost:11.0
| 1medium
|
Title: `ValidationError` not found
Body: I have not used the framework for some months now. I upgraded the package to the latest version (0.7.10), started a new project, and got the error message. Eve-demo shows the same error.
### Expected Behavior
Should run out-of-the-box.
```python
# using eve-demo
```
### Actual Behavior
Throws an error. Maybe it is a mismatch with the cerberus version?
```pytb
Traceback (most recent call last):
File "run.py", line 19, in <module>
from eve import Eve
File "/usr/local/lib/python3.5/dist-packages/eve/__init__.py", line 87, in <module>
from eve.flaskapp import Eve # noqa
File "/usr/local/lib/python3.5/dist-packages/eve/flaskapp.py", line 24, in <module>
from eve.endpoints import collections_endpoint, item_endpoint, home_endpoint, \
File "/usr/local/lib/python3.5/dist-packages/eve/endpoints.py", line 18, in <module>
from eve.methods import get, getitem, post, patch, delete, deleteitem, put
File "/usr/local/lib/python3.5/dist-packages/eve/methods/__init__.py", line 15, in <module>
from eve.methods.post import post
File "/usr/local/lib/python3.5/dist-packages/eve/methods/post.py", line 19, in <module>
from eve.validation import ValidationError
File "/usr/local/lib/python3.5/dist-packages/eve/validation.py", line 16, in <module>
from cerberus import ValidationError, SchemaError
ImportError: cannot import name 'ValidationError'
```
### Environment
* Python version: 3.5.3
* Eve version: 0.7.10
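The cerberus-mismatch hypothesis is easy to check, since `ValidationError` is no longer importable from cerberus in the 1.x line. A small probe (just a sketch; it only inspects the installed package, nothing Eve-specific):

```python
# Probe whether the installed cerberus still exposes the pre-1.0 names
# that eve/validation.py tries to import.
try:
    from cerberus import ValidationError, SchemaError  # pre-1.0 API
    has_legacy_api = True
except ImportError:  # also covers cerberus not being installed at all
    has_legacy_api = False

print("cerberus exposes ValidationError:", has_legacy_api)
```

If this prints `False`, pinning cerberus to the version range that Eve 0.7.10 declares in its requirements should resolve the import error.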
| 1medium
|
Title: Allow setting other scales in log_scale
Body: The addition of `log_scale` has been great. Especially for violins. I know you can get the base using a number, but would it possible to allow setting a different scale? For example, I use `symlog` and `logit` quite a bit in my work. | 1medium
|
Title: N-HiTS encounters a RuntimeError when using 'time_varying_unknown_reals' and 'time_varying_known_reals' covariates simultaneously
Body: - PyTorch-Forecasting version: 0.10.2
- PyTorch version: 1.12.0
- Python version: 3.8.3
- Operating System: Linux
### Expected behavior
I executed the example code of N-HiTS (https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/nhits.html).
In order to test the forecasting ability of N-HiTS when using covariates, I add two new columns, named ’log_value' and 'exp_value' respectively, into the example data.
Under normal conditions, N-HiTS should take these covariates as a part of input and output forecast result.
### Actual behavior
However, N-HiTS crashed because of a RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x35 and 24x64).
### Code to reproduce the problem
```
import os
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping
import torch
from pytorch_forecasting import Baseline, NHiTS, TimeSeriesDataSet
from pytorch_forecasting.data import NaNLabelEncoder
from pytorch_forecasting.data.examples import generate_ar_data
from pytorch_forecasting.metrics import SMAPE, QuantileLoss
data = generate_ar_data(seasonality=10.0, timesteps=400, n_series=1, seed=42)
data["static"] = 2
data["date"] = pd.Timestamp("2020-01-01") + pd.to_timedelta(data.time_idx, "D")
data['exp_value'] = np.exp(data['value'])
data['log_value'] = np.log(abs(data['value']) + 1)
data.head()
data['static'] = data['static'].astype(str)
# create dataset and dataloaders
max_encoder_length = 11
max_prediction_length = 10
training_cutoff = data["time_idx"].max() - max_prediction_length
context_length = max_encoder_length
prediction_length = max_prediction_length
training = TimeSeriesDataSet(
data[lambda x: x.time_idx <= training_cutoff],
time_idx="time_idx",
target="value",
# categorical_encoders={"series": NaNLabelEncoder().fit(data.series)},
group_ids=["series"],
# time_varying_known_categoricals=['static'],
# only unknown variable is "value" - and N-Beats can also not take any additional variables
time_varying_unknown_reals=["value", 'exp_value'],
time_varying_known_reals=['log_value'],
max_encoder_length=context_length,
max_prediction_length=prediction_length,
)
validation = TimeSeriesDataSet.from_dataset(training, data, min_prediction_idx=training_cutoff + 1)
batch_size = 128
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=0)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=0)
# calculate baseline absolute error
actuals = torch.cat([y[0] for x, y in iter(val_dataloader)])
baseline_predictions = Baseline().predict(val_dataloader)
SMAPE()(baseline_predictions, actuals)
early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=10, verbose=False, mode="min")
trainer = pl.Trainer(
max_epochs=2,
gpus=1,
enable_model_summary=True,
gradient_clip_val=1.0,
callbacks=[early_stop_callback],
limit_train_batches=30,
enable_checkpointing=True,
)
net = NHiTS.from_dataset(
training,
learning_rate=0.09,
log_interval=10,
log_val_interval=1,
weight_decay=1e-2,
backcast_loss_ratio=0.0,
hidden_size=64,
loss=QuantileLoss([0.5,]),
# time_varying_categoricals_encoder=['static'],
# time_varying_categoricals_decoder=['static'],
# time_varying_reals_encoder=['value', 'exp_value', 'log_value'],
# time_varying_reals_decoder=['log_value'],
)
trainer.fit(
net,
train_dataloaders=train_dataloader,
val_dataloaders=val_dataloader,
)
```
### Debug info
```
RuntimeError Traceback (most recent call last)
<ipython-input-1-54cddb48a7c1> in <module>
26 )
27
---> 28 trainer.fit(
29 net,
30 train_dataloaders=train_dataloader,
/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
768 """
769 self.strategy.model = model
--> 770 self._call_and_handle_interrupt(
771 self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
772 )
/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _call_and_handle_interrupt(self, trainer_fn, *args, **kwargs)
721 return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
722 else:
--> 723 return trainer_fn(*args, **kwargs)
724 # TODO: treat KeyboardInterrupt as BaseException (delete the code below) in v1.7
725 except KeyboardInterrupt as exception:
/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
809 ckpt_path, model_provided=True, model_connected=self.lightning_module is not None
810 )
--> 811 results = self._run(model, ckpt_path=self.ckpt_path)
812
813 assert self.state.stopped
/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _run(self, model, ckpt_path)
1234 self._checkpoint_connector.resume_end()
1235
-> 1236 results = self._run_stage()
1237
1238 log.detail(f"{self.__class__.__name__}: trainer tearing down")
/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _run_stage(self)
1321 if self.predicting:
1322 return self._run_predict()
-> 1323 return self._run_train()
1324
1325 def _pre_training_routine(self):
/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _run_train(self)
1343
1344 with isolate_rng():
-> 1345 self._run_sanity_check()
1346
1347 # enable train mode
/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _run_sanity_check(self)
1411 # run eval step
1412 with torch.no_grad():
-> 1413 val_loop.run()
1414
1415 self._call_callback_hooks("on_sanity_check_end")
/usr/local/lib/python3.8/site-packages/pytorch_lightning/loops/base.py in run(self, *args, **kwargs)
202 try:
203 self.on_advance_start(*args, **kwargs)
--> 204 self.advance(*args, **kwargs)
205 self.on_advance_end()
206 self._restarting = False
/usr/local/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py in advance(self, *args, **kwargs)
153 if self.num_dataloaders > 1:
154 kwargs["dataloader_idx"] = dataloader_idx
--> 155 dl_outputs = self.epoch_loop.run(self._data_fetcher, dl_max_batches, kwargs)
156
157 # store batch level output per dataloader
/usr/local/lib/python3.8/site-packages/pytorch_lightning/loops/base.py in run(self, *args, **kwargs)
202 try:
203 self.on_advance_start(*args, **kwargs)
--> 204 self.advance(*args, **kwargs)
205 self.on_advance_end()
206 self._restarting = False
/usr/local/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py in advance(self, data_fetcher, dl_max_batches, kwargs)
126
127 # lightning module methods
--> 128 output = self._evaluation_step(**kwargs)
129 output = self._evaluation_step_end(output)
130
/usr/local/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py in _evaluation_step(self, **kwargs)
224 output = self.trainer._call_strategy_hook("test_step", *kwargs.values())
225 else:
--> 226 output = self.trainer._call_strategy_hook("validation_step", *kwargs.values())
227
228 return output
/usr/local/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _call_strategy_hook(self, hook_name, *args, **kwargs)
1763
1764 with self.profiler.profile(f"[Strategy]{self.strategy.__class__.__name__}.{hook_name}"):
-> 1765 output = fn(*args, **kwargs)
1766
1767 # restore current_fx when nested context
/usr/local/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py in validation_step(self, *args, **kwargs)
342 """
343 with self.precision_plugin.val_step_context():
--> 344 return self.model.validation_step(*args, **kwargs)
345
346 def test_step(self, *args, **kwargs) -> Optional[STEP_OUTPUT]:
/usr/local/lib/python3.8/site-packages/pytorch_forecasting/models/base_model.py in validation_step(self, batch, batch_idx)
417 def validation_step(self, batch, batch_idx):
418 x, y = batch
--> 419 log, out = self.step(x, y, batch_idx)
420 log.update(self.create_log(x, y, out, batch_idx))
421 return log
/usr/local/lib/python3.8/site-packages/pytorch_forecasting/models/nhits/__init__.py in step(self, x, y, batch_idx)
352 Take training / validation step.
353 """
--> 354 log, out = super().step(x, y, batch_idx=batch_idx)
355
356 if self.hparams.backcast_loss_ratio > 0: # add loss from backcast
/usr/local/lib/python3.8/site-packages/pytorch_forecasting/models/base_model.py in step(self, x, y, batch_idx, **kwargs)
557 loss = loss * (1 + monotinicity_loss)
558 else:
--> 559 out = self(x, **kwargs)
560
561 # calculate loss
/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.8/site-packages/pytorch_forecasting/models/nhits/__init__.py in forward(self, x)
263
264 # run model
--> 265 forecast, backcast, block_forecasts, block_backcasts = self.model(
266 encoder_y, encoder_mask, encoder_x_t, decoder_x_t, x_s
267 )
/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.8/site-packages/pytorch_forecasting/models/nhits/sub_modules.py in forward(self, encoder_y, encoder_mask, encoder_x_t, decoder_x_t, x_s)
356 # forecast by block
357 for block in self.blocks:
--> 358 block_backcast, block_forecast = block(
359 encoder_y=residuals, encoder_x_t=encoder_x_t, decoder_x_t=decoder_x_t, x_s=x_s
360 )
/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.8/site-packages/pytorch_forecasting/models/nhits/sub_modules.py in forward(self, encoder_y, encoder_x_t, decoder_x_t, x_s)
191
192 # Compute local projection weights and projection
--> 193 theta = self.layers(encoder_y)
194 backcast_theta = theta[:, : self.context_length * len(self.output_size)].reshape(-1, self.context_length)
195 forecast_theta = theta[:, self.context_length * len(self.output_size) :].reshape(-1, self.n_theta)
/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.8/site-packages/torch/nn/modules/container.py in forward(self, input)
137 def forward(self, input):
138 for module in self:
--> 139 input = module(input)
140 return input
141
/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
112
113 def forward(self, input: Tensor) -> Tensor:
--> 114 return F.linear(input, self.weight, self.bias)
115
116 def extra_repr(self) -> str:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x35 and 24x64)
```
| 2hard
|
Title: Lambda Warmer
Body: Hello,
I'm using FastAPI and AWS Lambda, and I successfully migrated my projects thanks to your plugin. Thanks a lot for that.
I'm now trying to add a warmer so that my lambda is called every 5 min.
I've found https://pypi.org/project/lambda-warmer-py/ and I'm struggling to add its decorator, since with Mangum there is no explicit handler function of ours to decorate.
How should I make use of this lambda warmer?
I've tried to create a function named handler that would return Mangum(app) if 'warmer' is not in the event, but this doesn't work.
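For context, here is roughly the shape I've been trying, as a minimal plain-Python sketch (the `make_handler` wrapper, the `{"warmed": True}` response, and the `"warmer"` event key are my assumptions based on lambda-warmer-py's docs, not Mangum API):

```python
# Sketch only: short-circuit warmer pings before they reach the ASGI app.
# In a real app, asgi_handler would be Mangum(app); a stub stands in here.
def make_handler(asgi_handler):
    def handler(event, context):
        # lambda-warmer-py invokes the function with a {"warmer": true} event
        if isinstance(event, dict) and event.get("warmer"):
            return {"warmed": True}
        return asgi_handler(event, context)
    return handler

# handler = make_handler(Mangum(app))  # what I'd export as the Lambda handler
```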
I'm a bit lost on how to do this.
thanks | 1medium
|
Title: How to set the userAgent with mobile emulation
Body:
```python
mobile_emulation = {
    "deviceMetrics": {"width": 360, "height": 640, "pixelRatio": 3.0},
    "userAgent": "Mozilla/5.0 (Linux; Android 4.2.1; en-us; Nexus 5 Build/JOP40D) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.166 Mobile Safari/535.19",
}
chrome_options = Options()
chrome_options.add_experimental_option("mobileEmulation", mobile_emulation)
```
I have tried the capabilities in JSON format but it doesn't seem to work. Do you have any advice?
Thanks | 1medium
|
Title: There should be the doc page for v2 `AutoAugmentPolicy`.
Body: ### 📚 The doc issue
There are a version 1 (v1) `AutoAugmentPolicy` and a version 2 (v2) `AutoAugmentPolicy`, and they can be imported as shown below:
```python
from torchvision.transforms import AutoAugmentPolicy
from torchvision.transforms.v2 import AutoAugmentPolicy
```
But there is only [the doc page](https://pytorch.org/vision/main/generated/torchvision.transforms.AutoAugmentPolicy.html) for the v1 `AutoAugmentPolicy`, and no doc page for the v2 `AutoAugmentPolicy`.
### Suggest a potential alternative/fix
So there should be the doc page for v2 `AutoAugmentPolicy`. | 0easy
|
Title: Connection string does not parse hash character properly
Body: * **asyncpg version**: 0.18.3
* **PostgreSQL version**: 11.5
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: Reproduced locally
* **Python version**: 3.7.4
* **Platform**: macOS, Debian
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: Yes
* **If you built asyncpg locally, which version of Cython did you use?**: N/A
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: Didn't try yet
Similar issue to #429 . If the password includes a `#` character, you get the following error:
`ValueError: invalid literal for int() with base 10:`
The URI I was using looked like: `postgresql://postgres:abcd@127.0.0.1:5432/db`, and `psql` parses it correctly.
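For what it's worth, percent-encoding the password before building the DSN works around the parsing problem (a hedged sketch; the password below is made up):

```python
# Workaround sketch: percent-encode reserved characters such as '#'
# so the URI parser does not treat them as a fragment delimiter.
from urllib.parse import quote

password = "ab#cd"  # hypothetical password containing '#'
encoded = quote(password, safe="")
dsn = f"postgresql://postgres:{encoded}@127.0.0.1:5432/db"
```

Here `#` becomes `%23` inside the DSN.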
When you remove the `#`, it works fine. | 1medium
|
Title: It seems that #61 and #50 are similar
Body: > 61. Find the nearest value from a given value in an array (★★☆)
> 50. How to find the closest value (to a given scalar) in a vector? (★★☆)
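For what it's worth, both exercises reduce to the same argmin-of-absolute-difference operation, sketched here in plain Python (the exercises themselves use NumPy):

```python
# Both "nearest value to a given value" exercises boil down to this:
def closest(values, target):
    return min(values, key=lambda v: abs(v - target))
```

In NumPy this is the familiar `a[np.abs(a - v).argmin()]` one-liner.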
| 0easy
|
Title: Check the correct value of Quantity for non-v1.ResourceCPU resource in GetResourceRequest
Body: **What type of PR is this?**
/kind bug
**What this PR does / why we need it**:
In `GetResourceRequest`, the check for the `v1.ResourceCPU` resource and the `rQuantity.MilliValue()` comparison are bundled into a single condition.
This introduces a bug: for `v1.ResourceCPU` whose `rQuantity.MilliValue()` is not greater than `totalResources`, its `rQuantity.Value()` would be compared again.
However, the unit for `v1.ResourceCPU` should be milli.
This PR fixes the bug.
```release-note
NONE
```
| 1medium
|
Title: E2e tests to check whether cluster autoscaling scale down works when one node pool is not autoscaled
Body: Follow up of #1301.
cc: @piosz @jszczepkowski @fgrzadkowski
| 1medium
|
Title: Metadata for configuration data point
Body: Hey friends,
does Dynaconf support metadata alongside its managed configuration? How would one go about adding further details about a data point? Examples would be the owner or source of a setting, its range and restrictions, or a short description for UI explanation.
For my specific use case (data science lab work) I would prefer metadata alongside the value, however I can see that for many production software this might probably find its way into another related file in parallel.
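To make the idea concrete, here is the kind of shape I have in mind, sketched in plain Python with made-up keys (not a Dynaconf feature I know of):

```python
# Each data point carries its value plus arbitrary metadata; accessors
# unwrap them. Key names here are illustrative only.
SETTINGS = {
    "learning_rate": {
        "value": 0.01,
        "owner": "ds-team",
        "description": "SGD step size",
        "range": (0.0, 1.0),
    },
}

def get(name):
    return SETTINGS[name]["value"]

def meta(name):
    return {k: v for k, v in SETTINGS[name].items() if k != "value"}
```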
Is there any support for metadata in the context of Dynaconf or would you guys have a recommendation for a best practice solution? Thanks! | 1medium
|
Title: Model conversion and merging: error when merging with merge_llama_with_chinese_lora.py
Body: Hello, I have a question I'd like to ask:
### Describe the problem in detail
Merging the 7B model.
Step 1 is done: converting the original LLaMA to HF format.
In step 2, when merging with the command:
```
# python merge_llama_with_chinese_lora.py \
# --base_model /work/models/llama/llama-7b-combine/llama-7b-hf \
# --lora_model /work/models/llama/llama-7b-combine/chinese-llama-plus-lora-7b,
# /work/models/llama/llama-7b-combine/chinese-alpaca-plus-lora-7b \
# --output_type huggingface \
# --output_dir /work/models/llama/llama-7b-combine/llama-7b-merge
```
The error is as follows:
```
File ~/miniconda3/envs/chinese-llama/lib/python3.10/site-packages/peft/utils/save_and_load.py:123, in set_peft_model_state_dict(model, peft_model_state_dict, adapter_name)
120 else:
121 raise NotImplementedError
--> 123 model.load_state_dict(peft_model_state_dict, strict=False)
124 if isinstance(config, PromptLearningConfig):
125 model.prompt_encoder[adapter_name].embedding.load_state_dict(
126 {"weight": peft_model_state_dict["prompt_embeddings"]}, strict=True
127 )
File ~/miniconda3/envs/chinese-llama/lib/python3.10/site-packages/torch/nn/modules/module.py:1671, in Module.load_state_dict(self, state_dict, strict)
1666 error_msgs.insert(
1667 0, 'Missing key(s) in state_dict: {}. '.format(
1668 ', '.join('"{}"'.format(k) for k in missing_keys)))
1670 if len(error_msgs) > 0:
-> 1671 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
1672 self.__class__.__name__, "\n\t".join(error_msgs)))
1673 return _IncompatibleKeys(missing_keys, unexpected_keys)
RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:
size mismatch for base_model.model.model.embed_tokens.modules_to_save.default.weight: copying a param with shape torch.Size([49954, 4096]) from checkpoint, the shape in current model is torch.Size([49953, 4096]).
size mismatch for base_model.model.lm_head.modules_to_save.default.weight: copying a param with shape torch.Size([49954, 4096]) from checkpoint, the shape in current model is torch.Size([49953, 4096]).
```
chinese-llama-plus-lora-7b merges without problems
chinese-alpaca-plus-lora-7b fails with this error
The models were downloaded from Hugging Face
After reading the earlier issues:
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/12
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/281
I learned that the model size is indeed one larger. I'd like to ask how this should be handled, or which article I should refer to, and whether the actual size after merging should be [49954, 4096].
| 2hard
|
Title: The SRL module in version 3.4.0 has a problem
Body:

Hello. I built ltp-test, downloaded the 3.4.0 model files from Baidu Cloud, and tested with the following command line:
ltp_test.exe --input input.txt --srl-data pisrl.model > output.txt
The error above then appeared. (Later debugging showed the exception is thrown in DepSRL.cpp -> LoadResource() -> `boost::archive::binary_iarchive ia(in);`.)
I then downloaded the pre-built executable package you compiled from Qiniu, and the same problem occurred.
Running these files under Linux raises no error, but output.txt is empty.
May I ask, is my usage incorrect?
| 1medium
|
Title: Shufflenet model
Body: Hi, thanks for the code. Can you please share the link to the ShuffleNet pretrained model? | 3misc
|
Title: Lastest release (2017.1.3) dies when trying to display a mock object
Body: The latest release (2017.1.3) dies with the following stack trace (slightly edited to anonimize it):
```
File "/usr/lib/python2.7/bdb.py", line 49, in trace_dispatch
return self.dispatch_line(frame)
File ".../site-packages/pudb/debugger.py", line 160, in dispatch_line
self.user_line(frame)
File ".../site-packages/pudb/debugger.py", line 381, in user_line
self.interaction(frame)
File ".../site-packages/pudb/debugger.py", line 349, in interaction
show_exc_dialog=show_exc_dialog)
File ".../site-packages/pudb/debugger.py", line 2084, in call_with_ui
return f(*args, **kwargs)
File ".../site-packages/pudb/debugger.py", line 2322, in interaction
self.event_loop()
File ".../site-packages/pudb/debugger.py", line 2280, in event_loop
canvas = toplevel.render(self.size, focus=True)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/container.py", line 1083, in render
focus and self.focus_part == 'body')
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/container.py", line 2085, in render
focus = focus and self.focus_position == i)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/container.py", line 1526, in render
canv = w.render((maxcol, rows), focus=focus and item_focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/container.py", line 1526, in render
canv = w.render((maxcol, rows), focus=focus and item_focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/decoration.py", line 225, in render
canv = self._original_widget.render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/widget.py", line 1751, in render
canv = get_delegate(self).render(size, focus=focus)
File ".../site-packages/urwid/widget.py", line 141, in cached_render
canv = fn(self, size, focus=focus)
File ".../site-packages/urwid/listbox.py", line 457, in render
(maxcol, maxrow), focus=focus)
File ".../site-packages/urwid/listbox.py", line 339, in calculate_visible
self._set_focus_complete( (maxcol, maxrow), focus )
File ".../site-packages/urwid/listbox.py", line 704, in _set_focus_complete
(maxcol,maxrow), focus)
File ".../site-packages/urwid/listbox.py", line 674, in _set_focus_first_selectable
(maxcol, maxrow), focus=focus)
File ".../site-packages/urwid/listbox.py", line 406, in calculate_visible
n_rows = next.rows( (maxcol,) )
File ".../site-packages/urwid/widget.py", line 201, in cached_rows
return fn(self, size, focus)
File ".../site-packages/pudb/var_view.py", line 120, in rows
return len(self._get_text(size))
File ".../site-packages/pudb/var_view.py", line 108, in _get_text
alltext = var_label + ": " + value_str
TypeError: cannot concatenate 'str' and 'Mock' objects
```
From what I can tell, pudb tries to stringify a Mock object. I'm using the [mock](https://pypi.python.org/pypi/mock/) library with Python 2.7. The code works with the previous version (2017.1.2). My pudb config is:
```
[pudb]
breakpoints_weight = 1
current_stack_frame = top
custom_shell =
custom_stringifier =
custom_theme =
display = auto
line_numbers = True
prompt_on_quit = True
seen_welcome = e032
shell = internal
sidebar_width = 0.5
stack_weight = 1
stringifier = type
theme = classic
variables_weight = 1
wrap_variables = True
``` | 1medium
|
Title: GNNExplainer inconsistency with the paper?
Body: ### 🚀 The feature, motivation and pitch
Hi!
I have a question about the GNNExplainer feature selection part. In the article it's claimed that, in order to solve the potential issue with important features whose values are zero, 1) a Monte Carlo estimate is used to sample feature subsets and then 2) a reparametrization trick is applied.
The problem is that I actually haven't found those parts in the PyTorch Geometric implementation. In fact, I'm asking this to find out whether PyTorch Geometric could mistakenly show that some feature is unimportant if it was equal to zero :)
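To be explicit about which part I mean, here is my reading of the paper's sampling step, sketched in plain Python (this is not PyG code; the temperature value and the Logistic noise are my assumptions about the concrete relaxation):

```python
import math
import random

def sample_feature_mask(logits, temperature=0.5):
    """Soft Bernoulli mask via sigmoid((theta + logistic_noise) / T)."""
    mask = []
    for theta in logits:
        u = random.random()
        noise = math.log(u) - math.log(1.0 - u)  # Logistic(0, 1) sample
        mask.append(1.0 / (1.0 + math.exp(-(theta + noise) / temperature)))
    return mask
```

Because the mask is sampled from learned logits rather than gated by the raw feature values, a zero-valued feature can still receive a high mask weight.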
Thank you in advance!
### Alternatives
_No response_
### Additional context
_No response_ | 1medium
|
Title: Error during training with glint360k dataset
Body: When training the arcface_torch project with the glint360k dataset, accelerated through DALI, the following error occurred after about an hour of training. Why is this?
Error when executing Mixed operator decoders__Image encountered:
Error in thread 7: [/opt/dali/dali/operators/decoder/nvjpeg/nvjpeg_decoder_decoupled_api.h:608] [/opt/dali/dali/image/image_factory.cc:102] Unrecognized image format. Supported formats are: JPEG, PNG, BMP, TIFF, PNM, JPEG2000 and WebP.

data directory:

| 1medium
|
Title: Missing Daily Data - June 11 (data from yesterday) for all tickers
Body: ### Describe bug
June 11 data (from yesterday) is missing for all tickers. Values are listed as none.
### Simple code that reproduces your problem
```python
import yfinance as yf

print(yf.__version__)

def dailyData(tickerList):
    data = yf.download(
        tickers=tickerList,
        period="6mo",
        interval="1d",
        group_by="ticker",
        auto_adjust=False,
        prepost=False,
        threads=True,
        proxy=None,
    )
    return data

# Define the ticker list with MSFT
ticker_list = ["MSFT"]

# Get the daily data for MSFT
msft_data = dailyData(ticker_list)

# Print the daily data for MSFT
print(msft_data)
```
### Debug log
0.2.40
[*********************100%%**********************] 1 of 1 completed
Open High Low Close Adj Close Volume
Date
2023-12-13 376.019989 377.640015 370.769989 374.369995 373.006165 30955500
2023-12-14 373.309998 373.760010 364.130005 365.929993 364.596924 43277500
2023-12-15 366.850006 372.399994 366.279999 370.730011 369.379456 78478200
2023-12-18 369.450012 373.000000 368.679993 372.649994 371.292419 21802900
2023-12-19 371.489990 373.260010 369.839996 373.260010 371.900238 20603700
... ... ... ... ... ... ...
2024-06-05 417.809998 424.079987 416.299988 424.010010 424.010010 16988000
2024-06-06 424.010010 425.309998 420.579987 424.519989 424.519989 14861300
2024-06-07 426.200012 426.279999 423.000000 423.850006 423.850006 13621700
2024-06-10 424.700012 428.079987 423.890015 427.869995 427.869995 14003000
2024-06-12 435.320007 443.399994 433.250000 441.059998 441.059998 22221056
### Bad data proof
2024-06-05 417.809998 424.079987 416.299988 424.010010 424.010010 16988000
2024-06-06 424.010010 425.309998 420.579987 424.519989 424.519989 14861300
2024-06-07 426.200012 426.279999 423.000000 423.850006 423.850006 13621700
2024-06-10 424.700012 428.079987 423.890015 427.869995 427.869995 14003000
2024-06-12 435.320007 443.399994 433.250000 441.059998 441.059998 22221056
### `yfinance` version
^0.2.40
### Python version
^3.11
### Operating system
MacOS 14.5 (23F79) | 1medium
|
Title: docs: worker concurrency not documented correctly (Typescript)
Body: Documented is:
```typescript
const worker = hatchet.worker("my-worker", {
  maxRuns: 5,
});
```
But instead it should be:
```typescript
const worker = hatchet.worker("my-worker", 5);
```
The documented code does not work and leads to an error.
https://docs.hatchet.run/home/features/concurrency/overview | 0easy
|
Title: bug: EventBus description is not assigned
Body: ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Create EventBus with Description. Describe the created EventBus - description is NOT assigned.
### Expected Behavior
Create EventBus with Description. Describe the created EventBus - the description is ASSIGNED.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal events create-event-bus --name bus-with-description --description "Example description"
awslocal events describe-event-bus --name bus-with-description
### Environment
```markdown
- OS: macOS 15.1.1
- LocalStack:
LocalStack version: 4.0.4.dev63
LocalStack Docker image sha: sha256:c121cc3b9bd3d39a918b093b238ce8bad68526a3a4d99b5776cb58e9ccb682ed
LocalStack build date: 2024-12-23
LocalStack build git hash: 96eec2edd
```
### Anything else?
_No response_ | 1medium
|
Title: How can model forgetting caused by training tasks in sequence be solved?
Body: ### Describe the problem in detail
With a dataset of 10 tasks, if they are trained one after another (for example, after task 1 finishes, task 2 continues training from task 1's weights, usually LoRA weights), then inference quality on task 1 becomes very poor. If the 10 tasks are trained together instead, any data added later will be trained on top of the combined 10-task weights, which disturbs those weights and also degrades inference quality on the 10 tasks.
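One mitigation I'm considering is simple rehearsal/replay, i.e. mixing a fraction of earlier tasks' samples into each new task's training set instead of retraining everything (a generic sketch, not something from this repo; the ratio is arbitrary):

```python
import random

def build_training_set(new_task, old_tasks, replay_ratio=0.2, seed=0):
    """Mix a replay_ratio-sized sample of old-task data into the new task."""
    rng = random.Random(seed)
    pool = [example for task in old_tasks for example in task]
    k = min(len(pool), int(len(new_task) * replay_ratio))
    return new_task + rng.sample(pool, k)
```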
Have you run into this kind of problem during training? How do you solve it? Training all tasks together every time takes quite a long time. | 1medium
|
Title: Global Config option to control single column index check_name
Body: **Is your feature request related to a problem? Please describe.**
Hi.
The documentation for `Field(check_name=...)`
https://pandera.readthedocs.io/en/stable/reference/generated/pandera.api.pandas.model_components.Field.html#pandera-api-pandas-model-components-field
states
> Whether to check the name of the column/index during validation. None is the default behavior, which translates to True for columns and multi-index, and to False for a single index.
1. In my project we have many schemas, and there is a `BaseSchemaModel` with some default `Config` options that all our schemas inherit.
2. We'd like the index name to be validated; however, adding `check_name=True` to every single schema is a maintenance nightmare that many would easily forget.
**Describe the solution you'd like**
Quite simple: it would be nice if there were an option `Config.index_check_name=False` which controlled this behavior for single-column indexes. The default `False` represents today's behavior.
A value of `True` would then change how this `if` behaves.
https://github.com/unionai-oss/pandera/blob/850dcf8e59632d54bc9a6df47b9ca08afa089a27/pandera/api/pandas/model.py#L392-L397
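To make the proposal concrete, the resolution order I have in mind looks like this (plain-Python sketch, not pandera internals; `config_default` stands in for the proposed `Config.index_check_name`):

```python
def resolve_check_name(field_value, is_index, is_multiindex, config_default=False):
    # Field(check_name=...) wins when set explicitly.
    if field_value is not None:
        return field_value
    # None on a single index: fall back to the proposed Config flag
    # (today this is effectively hard-coded to False).
    if is_index and not is_multiindex:
        return config_default
    # Columns and multi-index keep today's default of True.
    return True
```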
| 1medium
|
Title: BUG: dropna throws error with arguments axis=1 and subset not None
Body: ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
df = pd.DataFrame({"name": ['Alfred', 'Batman', 'Catwoman'], "toy": [np.nan, 'Batmobile', 'Bullwhip'], "born": [pd.NaT, pd.NaT,pd.NaT]})
df.dropna(axis=1, subset=['born'])
```
### Issue Description
Traceback (most recent call last):
File "/home/vscode/.local/lib/python3.10/site-packages/pandas/core/frame.py", line 6670, in dropna
raise KeyError(np.array(subset)[check].tolist())
KeyError: ['born']
### Expected Behavior
dropna should return the dataframe without the `born` column.
### Installed Versions
<details>
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.15
python-bits : 64
OS : Linux
OS-release : 6.11.5
Version : #1-NixOS SMP PREEMPT_DYNAMIC Tue Oct 22 13:51:37 UTC 2024
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 23.0.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.3
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : 2.0.36
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| 1medium
|
Title: Services are not accessible through all the masters using NodePort
Body: <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!-->
**What happened**: We have set up a Kubernetes cluster (version 1.13) using Kubespray on bare metal. Currently there are 3 masters and 3 workers. We have deployed some services with 3 replicas and expose them using NodePort. The issue is that some of the services are not available on all the masters via their NodePorts; they are available on only some of the masters. My understanding is that if we expose services using NodePort, they should be available through all the masters and workers.
**What you expected to happen**: My understanding is that if we expose services using NodePort, then they should be available through all the masters and workers.
**How to reproduce it (as minimally and precisely as possible)**:
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`): 1.13.0
- Cloud provider or hardware configuration: RHEL 7.6
- OS (e.g. from /etc/os-release): RHEL 7.6
- Kernel (e.g. `uname -a`): 3.10.0-957.1.3.el7.x86_64
- Install tools:
- Others:
| 1medium
|
Title: Partially initialized module 'pandas'
Body:
```python-traceback
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/Users/priyadcosta/Documents/GitHub/team-process-map/feature_engine/features/BERTopicFinal.ipynb Cell 2 in 1
----> 1 from bertopic import BERTopic
      2 import ssl
      3 import pandas as pd

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/bertopic/__init__.py:1
----> 1 from bertopic._bertopic import BERTopic
      3 __version__ = "0.15.0"
      5 __all__ = [
      6     "BERTopic",
      7 ]

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/bertopic/_bertopic.py:17
     15 import collections
     16 import numpy as np
---> 17 import pandas as pd
     18 import scipy.sparse as sp
     20 from tqdm import tqdm

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pandas/__init__.py:138
    120 from pandas.core.reshape.api import (
    121     concat,
    122     lreshape,
        (...)
    134     qcut,
    135 )
    137 from pandas import api, arrays, errors, io, plotting, tseries
--> 138 from pandas import testing  # noqa:PDF015
    139 from pandas.util._print_versions import show_versions
    141 from pandas.io.api import (
    142     # excel
    143     ExcelFile,
        (...)
    171     read_spss,
    172 )

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pandas/testing.py:6
      1 """
      2 Public testing utility functions.
      3 """
----> 6 from pandas._testing import (
      7     assert_extension_array_equal,
      8     assert_frame_equal,
      9     assert_index_equal,
     10     assert_series_equal,
     11 )
     13 __all__ = [
     14     "assert_extension_array_equal",
     15     "assert_frame_equal",
     16     "assert_series_equal",
     17     "assert_index_equal",
     18 ]

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pandas/_testing/__init__.py:903
    898     import pytest
    900     return pytest.raises(expected_exception, match=None)  # noqa: PDF010
--> 903 cython_table = pd.core.common._cython_table.items()
    906 def get_cython_table_params(ndframe, func_names_and_expected):
    907     """
    908     Combine frame, functions from com._cython_table
    909     keys and expected result.
        (...)
    921     List of three items (DataFrame, function, expected result)
    922     """

AttributeError: partially initialized module 'pandas' has no attribute 'core' (most likely due to a circular import)
```
| 2hard
|
Title: The results of running demo.py create too many boxes.
Body: I replaced the backbone network in this repo's code (with ResNet-101).
Losses are decreasing normally.
However, running demo.py produces far too many boxes. Is it too early to judge? The current iteration count is 6000.
<img width="544" alt="Screenshot 2020-03-03 2:41:37 AM" src="https://user-images.githubusercontent.com/48986889/75702278-84ed2400-5cf8-11ea-8cae-4397ee5e814c.png">
| 1medium
|
Title: Palette does not support the use of defaultdict with missing values
Body: Currently, Seaborn does not permit the use of defaultdict with missing values as a palette. A minimal example that reproduces this issue is:
```python
import seaborn as sns
import pandas as pd
from collections import defaultdict
data = pd.DataFrame({
"values": [1, 2, 3],
"hues": ["foo", "bar", "baz"],
})
palette = defaultdict(lambda: "#000000", {
"foo": "#ff0000",
"bar": "#00ff00",
})
sns.histplot(
x="values",
data=data,
hue="hues",
palette=palette,
)
```
My expectation is that this should use the default value of `#000000` for `baz`, which is missing from the palette. Instead, this raises an exception:
```python-traceback
Traceback (most recent call last):
File "/home/ehermes/test/seaborn_defaultdict.py", line 15, in <module>
sns.histplot(
File "/home/ehermes/venvs/seaborn/lib/python3.10/site-packages/seaborn/distributions.py", line 1384, in histplot
p.map_hue(palette=palette, order=hue_order, norm=hue_norm)
File "/home/ehermes/venvs/seaborn/lib/python3.10/site-packages/seaborn/_base.py", line 838, in map_hue
mapping = HueMapping(self, palette, order, norm, saturation)
File "/home/ehermes/venvs/seaborn/lib/python3.10/site-packages/seaborn/_base.py", line 150, in __init__
levels, lookup_table = self.categorical_mapping(
File "/home/ehermes/venvs/seaborn/lib/python3.10/site-packages/seaborn/_base.py", line 234, in categorical_mapping
raise ValueError(err.format(missing))
ValueError: The palette dictionary is missing keys: {'baz'}
```
For this test, I have used `seaborn-0.13.2` and `matplotlib-3.8.2`.
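Until this is supported, one workaround (my own sketch, not seaborn's API) is to materialize the defaultdict into a plain dict covering every hue level before passing it in, so no key is missing:

```python
from collections import defaultdict

def materialize_palette(palette, levels):
    # Looking up each level triggers the defaultdict's default_factory,
    # so levels missing from the mapping pick up the fallback color.
    return {level: palette[level] for level in levels}

palette = defaultdict(lambda: "#000000", {"foo": "#ff0000", "bar": "#00ff00"})
full = materialize_palette(palette, ["foo", "bar", "baz"])
print(full)  # {'foo': '#ff0000', 'bar': '#00ff00', 'baz': '#000000'}
```

Passing `full` as the `palette` argument then works, but it requires knowing the hue levels up front, which is exactly what native defaultdict support would avoid.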
I have a fix for this problem in a personal branch (https://github.com/ehermes/seaborn/tree/palette_defaultdict), but per your contribution guidelines, I have opened a bug report first. With permission, I can also create a PR for my fix. | 1medium
|
Title: Is there a way to use orjson's FFI to provide fast FFI to python via serde?
Body: I have a bunch of serde values, and I was curious whether it's possible to use orjson as the FFI mechanism instead of writing my own. I've noticed that my own approach (via pyo3) is slow, whereas orjson appears to do a lot of work to make its FFI efficient. | 1medium
|
Title: DocumentSplitter with NLTK download breaks AWS Lambda deployments
Body: **Describe the bug**
#8755 mentions that the `DocumentSplitter` argument `split_by='sentence'` now uses NLTK. When running in AWS Lambda with Haystack updated from 2.8.0 to 2.9.0, which includes this change, the Lambda exits with an error because it cannot download some NLTK files. AWS Lambda deployments have a read-only filesystem, and the updated `DocumentSplitter` now triggers a download of NLTK files at runtime, causing the Lambda to fail.
Is there a way to download these NLTK files ahead of time, i.e., during docker build of AWS Lambda image, so they do not need to be downloaded at runtime?
**Error message**
```txt
[nltk_data] Downloading package punkt_tab to
[nltk_data] /home/sbx_user1051/nltk_data...
[Errno 30] Read-only file system: '/home/sbx_user1051'
```
**Expected behavior**
Updating Haystack from one minor release to another will not result in otherwise unchanged code breaking due to changes in the Haystack API.
**System:**
- OS: AWS Lambda
- Haystack version (commit or version number): 2.9.0
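For what it's worth, one workaround I would expect to help (my own sketch, not an official Haystack recommendation) is to bake the NLTK data into the image during the docker build and point NLTK at it via the `NLTK_DATA` environment variable, so no download is attempted at runtime:

```dockerfile
FROM public.ecr.aws/lambda/python:3.11

RUN pip install haystack-ai nltk

# Download the tokenizer data at build time into a path that only
# needs to be readable (not writable) at runtime
RUN python -c "import nltk; nltk.download('punkt_tab', download_dir='/opt/nltk_data')"

# Tell NLTK where to find the data at runtime
ENV NLTK_DATA=/opt/nltk_data
```

The `punkt_tab` package name is taken from the error message above; the base image tag is an assumption.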
| 1medium
|
Title: hack/test-go shows build errors in tmpdir instead of the source tree
Body: For example:
```
/tmp/go-build179154832/github.com/openshift/origin/pkg/api/kubegraph/analysis/_test/_obj_test/hpa.go:26: invalid type assertion: ...
```
@smarterclayton thoughts on why?
| 1medium
|
Title: [Bug] Qwen2.5-VL-AWQ does not support concurrent requests
Body: ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
Qwen2.5-VL-AWQ does not support concurrent requests
### Reproduction
shell command:
```
modelPathOrName=$1
tpSize=$2
dpSize=$3
portService=$4
dockerName=$5
CUDA_VISIBLE_DEVICES=$6
image="lmsysorg/sglang:v0.4.0.post2-cu124"
echo "gpus=$CUDA_VISIBLE_DEVICES"
echo model:$modelPathOrName
echo TP:$tpSize
echo "DP:$dpSize"
[ -z "$modelPathOrName" ] && modelPathOrName="Qwen2.5-VL-72B-Instruct-AWQ"
[ -z "$CUDA_VISIBLE_DEVICES" ] && CUDA_VISIBLE_DEVICES="2,3"
[ ! -n "$tpSize" ] && tpSize=2
[ ! -n "$dpSize" ] && dpSize=1
[ ! -n "$portService" ] && portService=5090
[ ! -n "$dockerName" ] && dockerName="qwen2.5-vl-awq"
echo
echo "finally..."
echo model:$modelPathOrName
echo TP:$tpSize
echo "DP:$dpSize"
dockerArgs=(
--name "$dockerName"
--gpus all
--ipc host
--shm-size 64g
--network host
--privileged
-v /data/model:/home
-w /home
-d
-e CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES
)
serverArgs=(
--host 0.0.0.0
--port $portService
--model-path $modelPathOrName
--tp $tpSize
--dp $dpSize
--load-balance-method round_robin
--mem-fraction-static 0.8
--enable-p2p-check
--trust-remote-code
--api-key sk-xxxxxxxxxxxx
--chat-template qwen2-vl
)
docker run ${dockerArgs[@]} \
$image \
python3 -m sglang.launch_server \
${serverArgs[@]}
```
### Environment
docker: `docker pull lmsysorg/sglang:v0.4.3.post2-cu124`
model: https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct-AWQ | 1medium
|
Title: prune pointpillar with mmdetection3d
Body: I want to prune PointPillars on the mmdetection3d framework, but the model has more than one input (`def forward(self, img, img_metas, return_loss=True, **kwargs):`), which makes it unsuitable for speedup: `ModelSpeedup(model, dummy_input=torch.rand([10, 3, 32, 32]).to(device), masks_file=masks).speedup_model()`
So I just prune and speed up the backbone (`model.backbone`), following these steps:
step 1: load the original model
step 2: prune the model, speed it up, and save the sped-up model
step 3: load the sped-up model and finetune it
The reason I save the model in step 2 is that I cannot finetune it in mmdetection3d immediately after pruning it there.
The sparsity ratio is 0.9 (cutting 10% of the weights with L1Norm); accuracy drops by about 2%.
**My question is: am I doing this correctly?**
PS: finetuning loss

| 1medium
|
Title: Support pandas 1.0
Body: [Pandas 1.0](https://pandas.pydata.org/docs/whatsnew/v1.0.0.html) removed some functionality that was previously deprecated, and some features now seem broken. For example, trying to fit a FAMD gives the following error:
```
~/.pyenv/versions/3.8.2/envs/bioinformatics/lib/python3.8/site-packages/prince/one_hot.py in transform(self, X)
29
30 def transform(self, X):
---> 31 return pd.SparseDataFrame(
32 data=super().transform(X),
33 columns=self.column_names_,
TypeError: SparseDataFrame() takes no arguments
``` | 1medium
|
Title: Issue with project name and k8s namespace
Body: The regex for the project name is not the same as the one allowed for k8s namespaces.
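For illustration, the RFC 1123 label pattern quoted in the kubectl error below can be checked directly (a quick sketch):

```python
import re

# The RFC 1123 label pattern, copied from the kubectl error message
RFC1123_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

print(bool(RFC1123_LABEL.match("tmp_test")))  # False: '_' is not allowed
print(bool(RFC1123_LABEL.match("tmp-test")))  # True
```

So a project name containing an underscore is valid for the project-name regex but rejected as a namespace name.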
```
Error from server (Invalid): error when creating "deploy/kube/namespace.yml": Namespace "tmp_test" is invalid: metadata.name: Invalid value: "tmp_test": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')
``` | 1medium
|
Title: Issue with @api.representation('application/xml')
Body: I was following the example in the docs to create the decorator below.
```python
import dicttoxml
from flask import make_response

@api.representation('application/xml')
def xml(data, code, headers=None):
    resp = make_response(dicttoxml.dicttoxml(data), code)
    resp.headers.extend(headers or {})
    return resp
```
And I also set up:
```python
api = Api(app, default_mediatype='application/json')
```
The problem is that when I launched the Swagger dev portal and clicked the link http://localhost:5000/swagger.json, it returned XML rather than JSON, which doesn't make sense.
| 1medium
|
Title: nbviewer is rendering my ipynb in abnormal way.
Body: Hello,
First, thanks for using Jupyter Notebook Viewer (nbviewer).
I think, I found a bug. Please check the below.
nbviewer is rendering my ipynb(`ch9_note.ipynb`) in abnormal way.
Without any description, it will be better to click directly the below link.
https://nbviewer.jupyter.org/github/gritmind/my-review-notes/blob/master/code/book/pymldg/note_ipynb/ch9/ch9_note.ipynb
As you see, the rendering is abnormal. It renders something like in a horizental way.
The below link is just from github with the same ipynb file(`ch9_note.ipynb`).
https://github.com/gritmind/my-review-notes/blob/master/code/book/pymldg/note_ipynb/ch9/ch9_note.ipynb
You can see the rendering is good (but, slow because of not using nbviewer :D).
On this account, the ipynb file(`ch9_note.ipynb`) itself is good.
Therefore, I guess, the error is related to nbviewer itself.
Would you check this bug report? Thanks.
plus...
Interestingly, another file(`ch1_note.ipynb`) is fine. You can see the below link, nbviewer successfully renders it.
https://nbviewer.jupyter.org/github/gritmind/my-review-notes/blob/master/code/book/pymldg/note_ipynb/ch1/ch1_note.ipynb
Is there any difference between `ch1_note.ipynb` and `ch9_note.ipynb` in accordance with nbviewer redering? | 1medium
|
Title: How to enable the GPU
Body: Hello, I just tried your frame-interpolation program and the results are great, but while running it I noticed the load is entirely on the CPU and the GPU is not being used. Is there a parameter I can set to run it on the GPU? My graphics card is a 4070, on Windows 11. Thanks! | 1medium
|
Title: [Documentation] Include installation of Playwright
Body: ### What piece of documentation is affected?

The README, and also the actual documentation, should include the necessary steps to get Playwright installed.
### What part(s) of the article would you like to see updated?
The README, and also the actual documentation, should include the necessary steps to get Playwright installed.
### Additional Information
_No response_
### Acknowledgements
- [X] My issue title is concise, descriptive, and in title casing.
- [X] I have searched the existing issues to make sure this feature has not been requested yet.
- [X] I have provided enough information for the maintainers to understand and evaluate this request. | 0easy
|
Title: Running `xvfb-run pytest` in headed mode inside a Docker container still gets blocked
Body: 1. Nice work, and thank you!
2. I was able to run the code on my local desktop, and I can confirm it bypasses the stealth list here: https://github.com/Kaliiiiiiiiii-Vinyzu/patchright-python?tab=readme-ov-file#stealth
However, when I try to run the same code inside a Docker container, the agent seems to get blocked again by Kasada.
Any suggestions? | 1medium
|
Title: [BUG] Documents aren't being saved to MongoDB when bulk_writer is used with upsert
Body: **Describe the bug**
Documents aren't being saved to MongoDB when bulk_writer is used with upsert.
Using a new DB and collection the following runs without saving anything to the DB.
**To Reproduce**
pip freeze:
```
annotated-types==0.5.0
asyncio==3.4.3
beanie==1.21.0
click==8.1.6
dnspython==2.4.1
lazy-model==0.1.0b0
motor==3.2.0
pydantic==2.1.1
pydantic_core==2.4.0
pymongo==4.4.1
toml==0.10.2
typing_extensions==4.7.1
```
```python
import asyncio
from beanie import Document, Indexed, init_beanie
from beanie.odm.bulk import BulkWriter
from motor.motor_asyncio import AsyncIOMotorClient
from beanie.operators import Set
class User(Document):
name: Indexed(str)
async def example():
# Beanie uses Motor async client under the hood
client = AsyncIOMotorClient("mongodb://localhost:27017/BeanieTest")
# Initialize beanie with the Product document class
await init_beanie(database=client["BeanieBulkWriterUpsetTest"], document_models=[User])
async with BulkWriter() as bulk_writer:
js = User(name="John Smith")
await User.find_one(User.name == "John Smith", bulk_writer=bulk_writer).upsert(
Set({User.name: js.name}), on_insert=js, bulk_writer=bulk_writer
)
if __name__ == "__main__":
asyncio.run(example())
```
**Expected behavior**
If I remove the bulk_writer from `upsert(...)`, the document is saved to the DB as expected.
```python
import asyncio
from beanie import Document, Indexed, init_beanie
from beanie.odm.bulk import BulkWriter
from motor.motor_asyncio import AsyncIOMotorClient
from beanie.operators import Set
class User(Document):
name: Indexed(str)
async def example():
# Beanie uses Motor async client under the hood
client = AsyncIOMotorClient("mongodb://localhost:27017/BeanieTest")
# Initialize beanie with the Product document class
await init_beanie(database=client["BeanieBulkWriterUpsetTest"], document_models=[User])
async with BulkWriter() as bulk_writer:
js = User(name="John Smith")
await User.find_one(User.name == "John Smith", bulk_writer=bulk_writer).upsert(
Set({User.name: js.name}), on_insert=js
)
if __name__ == "__main__":
asyncio.run(example())
```
**Additional context**
Related to #224
| 2hard
|
Title: --html producing terminal output
Body: I am using
```
python -m pyinstrument --html -o test.html test.py
```
`test.html` contains the terminal-formatted output, not the HTML. | 0easy
|
Title: Monitor page requests are doubled on each page access
Body: All requests executed by the monitor page on the first load are always doubled, which makes the page load the data twice.
This video is the behavior of the page being refreshed once:
[doublerequests.mov](https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/a25376cb-3771-40f0-864d-1a12a8495bbb/818c8294-723f-46c7-a34f-82d36a9bfe9a?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi9hMjUzNzZjYi0zNzcxLTQwZjAtODY0ZC0xYTEyYTg0OTViYmIvODE4YzgyOTQtNzIzZi00NmM3LWEzNGYtODJkMzZhOWJmZTlhIiwiaWF0IjoxNzMxOTEzMDIzLCJleHAiOjMzMzAyNDczMDIzfQ.CPVN5AnJxuYBOxQTPSVe5klIx1a-6g9bC8L7FzpEcK8) | 1medium
|
Title: End day data is missing in the resulting DataFrame
Body: ### Describe bug
Hi.
I see that the last day's data is missing from the resulting DataFrame. For example, if I request data from 2020-08-28 to 2020-08-31, there will be no 2020-08-31 row in the resulting DataFrame, even though that data exists on Yahoo.
### Simple code that reproduces your problem
```
import yfinance as yf
yf.download('SPY', start="2020-08-28", end="2020-08-31")
[*********************100%%**********************] 1 of 1 completed
Open High Low Close Adj Close Volume
Date
2020-08-28 349.440002 350.720001 348.149994 350.579987 331.563324 48588900
yf.download('SPY', start="2020-08-28", end="2020-09-01")
[*********************100%%**********************] 1 of 1 completed
Open High Low Close Adj Close Volume
Date
2020-08-28 349.440002 350.720001 348.149994 350.579987 331.563385 48588900
2020-08-31 350.350006 351.299988 349.059998 349.309998 330.362244 66099200
```
### Debug log
```
yf.download('SPY', start="2020-08-28", end="2020-08-31")
DEBUG Entering download()
DEBUG Disabling multithreading because DEBUG logging enabled
DEBUG Entering history()
DEBUG Entering history()
DEBUG SPY: Yahoo GET parameters: {'period1': '2020-08-28 00:00:00-04:00', 'period2': '2020-08-31 00:00:00-04:00', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering get()
DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/SPY
DEBUG params=frozendict.frozendict({'period1': 1598587200, 'period2': 1598846400, 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'})
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG loaded persistent cookie
DEBUG reusing cookie
DEBUG crumb = 'Me.50YI13rf'
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting get()
DEBUG SPY: yfinance received OHLC data: 2020-08-28 13:30:00 -> 2020-08-28 13:30:00
DEBUG SPY: OHLC after cleaning: 2020-08-28 09:30:00-04:00 -> 2020-08-28 09:30:00-04:00
DEBUG SPY: OHLC after combining events: 2020-08-28 00:00:00-04:00 -> 2020-08-28 00:00:00-04:00
DEBUG SPY: yfinance returning OHLC: 2020-08-28 00:00:00-04:00 -> 2020-08-28 00:00:00-04:00
DEBUG Exiting history()
DEBUG Exiting history()
DEBUG Exiting download()
```
```
yf.download('SPY', start="2020-08-28", end="2020-09-01")
DEBUG Entering download()
DEBUG Disabling multithreading because DEBUG logging enabled
DEBUG Entering history()
DEBUG Entering history()
DEBUG SPY: Yahoo GET parameters: {'period1': '2020-08-28 00:00:00-04:00', 'period2': '2020-09-01 00:00:00-04:00', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering get()
DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/SPY
DEBUG params=frozendict.frozendict({'period1': 1598587200, 'period2': 1598932800, 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'})
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG reusing cookie
DEBUG reusing crumb
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting get()
DEBUG SPY: yfinance received OHLC data: 2020-08-28 13:30:00 -> 2020-08-31 13:30:00
DEBUG SPY: OHLC after cleaning: 2020-08-28 09:30:00-04:00 -> 2020-08-31 09:30:00-04:00
DEBUG SPY: OHLC after combining events: 2020-08-28 00:00:00-04:00 -> 2020-08-31 00:00:00-04:00
DEBUG SPY: yfinance returning OHLC: 2020-08-28 00:00:00-04:00 -> 2020-08-31 00:00:00-04:00
DEBUG Exiting history()
DEBUG Exiting history()
DEBUG Exiting download()
```
### Bad data proof
_No response_
### `yfinance` version
0.2.37
### Python version
3.9.10
### Operating system
macOS 14.1.2 | 1medium
|
Title: insightface-faceid extract None
Body: When I used the insightface module to extract face IDs from the FFHQ and CelebA datasets, I found that face IDs could be extracted for the 70,000 faces in FFHQ, but for more than 1,000 faces in CelebA no face ID could be extracted (the faces result is empty: []). What should I do?
For some uploaded faces no face ID could be extracted either. Some adjustments (such as sharpening or cropping) seemed to help in individual cases, but there was no single method that worked for all images. If you can provide some ideas, thank you very much! | 1medium
|
Title: [ajb/clasp] is STALE
Body: @TonyBagnall,
ajb/clasp has had no activity for 150 days.
This branch will be automatically deleted in 25 days. | 3misc
|