date (stringlengths 10-10) | nb_tokens (int64 60-629k) | text_size (int64 234-1.02M) | content (stringlengths 234-1.02M)
---|---|---|---|
2018/03/17 | 1,893 | 6,232 |
<issue_start>username_0: I'm learning **C programming**. I've been given a "Challenge Task" that states that I have to get binary input from the user bit by bit *(8 times, since there are 8 bits in a byte)*, asking for 1 or 0 *(I do not have to validate)*, and store them.
After collecting 8 bits, I then have to give an output
"Your decimal value for binary number is: " and then the decimal value.
"Your hexadecimal value for binary number is: " and then the hex value.
**I am not looking for answers or code, I am looking for ways to tackle this problem. Can you tell me the right approach in terms of thinking?**
I know that I'll need 8 int variables, bit1,bit2,bit3,bit4,bit5,bit6,bit7 and bit8.
Below is my code so far.
Thanks!
```
#include <stdio.h>
#include <stdlib.h>
int main()
{
int bit1,bit2,bit3,bit4,bit5,bit6,bit7,bit8;
printf("Please enter the first bit(0 or 1): ");
scanf("%d",&bit1);
printf("Please enter the second bit(0 or 1): ");
scanf("%d",&bit2);
printf("Please enter the third bit(0 or 1): ");
scanf("%d",&bit3);
printf("Please enter the fourth bit(0 or 1): ");
scanf("%d",&bit4);
printf("Please enter the fifth bit(0 or 1): ");
scanf("%d",&bit5);
printf("Please enter the sixth bit(0 or 1): ");
scanf("%d",&bit6);
printf("Please enter the seventh bit(0 or 1): ");
scanf("%d",&bit7);
printf("Please enter the eighth bit(0 or 1): ");
scanf("%d",&bit8);
return 0;
}
```<issue_comment>username_1: 2^7 * (1 OR 0) + 2^6 * (1 OR 0) + 2^5 * (1 OR 0) + 2^4 * (1 OR 0) + 2^3 * (1 OR 0) + 2^2 * (1 OR 0) + 2^1 * (1 OR 0) + 2^0 * (1 OR 0)
This is how you have to convert binary to decimal
(1 OR 0) is what you enter from input
Something like this if you want the inputs in 8 different variables:
```
#include <stdio.h>
#include <math.h>
int main()
{
int bit1,bit2,bit3,bit4,bit5,bit6,bit7,bit8;
int sum =0;
printf("Please enter the first bit(0 or 1): ");
scanf("%d",&bit1);
printf("Please enter the second bit(0 or 1): ");
scanf("%d",&bit2);
printf("Please enter the third bit(0 or 1): ");
scanf("%d",&bit3);
printf("Please enter the fourth bit(0 or 1): ");
scanf("%d",&bit4);
printf("Please enter the fifth bit(0 or 1): ");
scanf("%d",&bit5);
printf("Please enter the sixth bit(0 or 1): ");
scanf("%d",&bit6);
printf("Please enter the seventh bit(0 or 1): ");
scanf("%d",&bit7);
printf("Please enter the eighth bit(0 or 1): ");
scanf("%d",&bit8);
sum = (pow(2,7) * bit8) + (pow(2,6) * bit7) + (pow(2,5) * bit6)
    + (pow(2,4) * bit5) + (pow(2,3) * bit4) + (pow(2,2) * bit3) +
    (pow(2,1) * bit2) + (pow(2,0) * bit1);
printf("decimal = %d\n",sum);
return 0;
}
```
Upvotes: 0 <issue_comment>username_2: You don't need the 8 variables to store each bit. You can manage with a single variable `inputBit` to receive all the bits.
You could do this (for positive numbers) like
```
unsigned int n = 0;
int inputBit;
for(int i=0; i<8; ++i)
{
    printf("Enter bit %d: ", i);
    scanf("%d", &inputBit);
    n = n + (inputBit << i);
}
```
The resultant decimal number would be in `n` which was initialised to `0`.
`<<` is the bitwise left shift operator.
`inputBit << i` is added to `n` on each iteration of the loop.
When `inputBit` is `1`, the value of `inputBit << i` is the same as 2^i, and when `inputBit` is `0`, `inputBit << i` is `0`.
To print `n` in its hexadecimal form, do
```
printf("\nHexadecimal form: %x", n);
```
Upvotes: 1 <issue_comment>username_3: You do not need to declare `8` separate integers. You can collect your input into an `int bit[8]` array. After that you can use the array elements in a simple processing loop.
After collection is done you can construct the value of your `8` bit byte by taking into account that every collected bit in `bit[8]` has to go to a different position in your byte.
You can do that by shifting `1` according to the byte position and 'adding' it to the byte.
`<<` is the bitwise left shift operator.
You can use binary `OR` operator `'|'` to implant `1` into the proper byte position.
The binary `'|'` operation with `0` has no effect on the byte.
```
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
char c = 0;
int bit[8];
// Typically bits are counted from LS to MS where LS is bit 0
printf("Please enter the first bit(0 or 1): ");
scanf("%d",&bit[0]);
printf("Please enter the second bit(0 or 1): ");
scanf("%d",&bit[1]);
printf("Please enter the third bit(0 or 1): ");
scanf("%d",&bit[2]);
printf("Please enter the fourth bit(0 or 1): ");
scanf("%d",&bit[3]);
printf("Please enter the fifth bit(0 or 1): ");
scanf("%d",&bit[4]);
printf("Please enter the sixth bit(0 or 1): ");
scanf("%d",&bit[5]);
printf("Please enter the seventh bit(0 or 1): ");
scanf("%d",&bit[6]);
printf("Please enter the eighth bit(0 or 1): ");
scanf("%d",&bit[7]);
for(int i=0; i<8; i++)
{
c = c | (bit[i] << i);
}
printf("\nYour decimal value for binary number is: %d\n", c);
printf("Your hexadecimal value for binary number is: x%2.2X\n", (unsigned char)c);
return 0;
}
```
Output:
```
Please enter the first bit(0 or 1): 0
Please enter the second bit(0 or 1): 0
Please enter the third bit(0 or 1): 0
Please enter the fourth bit(0 or 1): 0
Please enter the fifth bit(0 or 1): 0
Please enter the sixth bit(0 or 1): 0
Please enter the seventh bit(0 or 1): 0
Please enter the eighth bit(0 or 1): 1
Your decimal value for binary number is: -128
Your hexadecimal value for binary number is: x80
```
Upvotes: 1 [selected_answer]
2018/03/17 | 616 | 2,055 |
<issue_start>username_0: I was wondering if there is a way to map a dictionary to a class. If this is my class:
```
class Class{
var x = 0
var y = "hi"
}
```
And this is my dictionary (`dict`) with type `[String: Any]`
```
["x": 1, "y": "hello"]
```
Is there any easy way to convert the values of the dictionary to my class `Class`?
I now do it like this:
```
classInstance.x = dict["x"] as? Int ?? 0
```
I would like to know if it possible to search in the JSON for a key that matches the name of the variable of the class and if it matches, assign the value of the JSON's key to the value of the variable of the class. In my way (above) I need to type it line by line and maybe there is a one-liner to map the JSON into the class.<issue_comment>username_1: Built-in solution with `JSONSerialization` and `Codable`
```
let dictionary : [String:Any] = ["x": 1, "y": "hello"]
class Class : Codable {
let x : Int
let y : String
}
do {
let jsonData = try JSONSerialization.data(withJSONObject: dictionary)
let instance = try JSONDecoder().decode(Class.self, from: jsonData)
print(instance.x, instance.y)
} catch {
print(error)
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: This is how my function finally looks. It takes in any type that conforms to `Decodable` and returns an instance decoded from the JSON data :)
```
import UIKit
class DecodeObject{
func decode<T: Decodable>(data: [String : Any], type: T.Type) -> T? {
do {
let jsonData = try JSONSerialization.data(withJSONObject: data)
return try JSONDecoder().decode(T.self, from: jsonData)
} catch {
return nil
}
}
}
```
Upvotes: 0 <issue_comment>username_3: ```
extension Dictionary where Key == String, Value == Any {
func decode<T: Decodable>() throws -> T? {
do {
let data = try JSONSerialization.data(withJSONObject: self)
return try JSONDecoder().decode(T.self, from: data)
} catch {
throw error
}
}
}
struct Person: Decodable {
let name: String
let age: Int
}
let person: Person? = try? ["name": "fred", "age": 12].decode()
```
Upvotes: 0
2018/03/17 | 828 | 2,304 |
<issue_start>username_0: I just started learning C. I am writing my code as follows:
```
#include <stdio.h>
#include <conio.h>
void main()
{
int i, s = 0;
clrscr();
for(i = 1 ; i <= 5 ; i++)
{
if ((i % 3 == 0) || (i % 5 == 0))
{
printf("%d\n", &i);
s = s + i;
}
}
printf("sum is: %d\n", &s);
getch();
}
```
But I am getting trouble in output, which is this:
<issue_comment>username_1: The address operator `&` is unnecessary:
```
printf("%d\n", i);
printf("sum is: %d\n", s);
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Correct your code from:
```
printf("%d\n", &i);
```
to:
```
printf("%d\n", i);
```
You do not need to use the `&` address operator.
Upvotes: 1 <issue_comment>username_3: `printf("%d\n",&i);` It should be `printf("%d\n",i);` and `&s` should be `s` Try below code...
```
#include <stdio.h>
#include <conio.h>
void main()
{
int i,s=0;
clrscr();
for(i=1;i<=5;i++){
if((i%3==0)||(i%5==0)){
printf("%d\n",i);
s=s+i;
}
}
printf("sum is:%d\n",s);
getch();
}
```
Upvotes: 0 <issue_comment>username_4: You can read how to use `printf` [here](http://www.cplusplus.com/reference/cstdio/printf/).
In short, `printf` does not require the address of the variable but its value - which is
the opposite of `scanf`.
Remove the `&` operator to use `printf` like:
```
printf("%d\n",i);
```
and
```
printf("sum is:%d\n",s);
```
See:
```
#include <stdio.h>
#include <conio.h>
int main(void)
{
int i,s=0;
clrscr();
for(i=1; i<=5; i++)
{
if((i%3==0)||(i%5==0))
{
printf("%d\n",i);
s=s+i;
}
}
printf("sum is:%d\n",s);
getch();
return 0;
}
```
Output:
```
3
5
sum is:8
```
Another improvement would be to comply with the `C` standard and declare `main` as `int main(void)` instead of `void main()`, and return a value from the program.
Upvotes: 0 <issue_comment>username_5: ```
#include <stdio.h>

int main(void) {
    int i;
    printf("the multiples of 3 or 5 are :");
    for (i = 0; i <= 10; i++) {
        if ((i % 3 == 0) || (i % 5 == 0)) printf("%d\n", i);
        else continue;
    }
    return 0;
}
```
Upvotes: 0
2018/03/17 | 509 | 1,597 |
<issue_start>username_0: I have a little problem with an INNER JOIN in a MySQL query. I have two tables: the first is 'kontrola' and the 2nd is 'naruszenia'. In the 'kontrola' table I have these columns:
* id
* podmiot
* miasto
* wszczeto
* zakonczono
* naruszenie_id (Foreign KEY on 'naruszenia' table)
In my naruszenia table I have:
* id
* naruszenia
Now I want to display, using INNER JOIN, the naruszenia value from the 'naruszenia' table.
I've created something like this:
```
$listakontroli = $connecting->query("SELECT * FROM kontrola INNER JOIN naruszenia ON
kontrola.naruszenie_id=naruszenia.id");
```
But the result is that when I display the records in a table, the id of the first table (kontrola) gets overwritten, and naruszenie_id still shows the id from the naruszenia table. How do I change it to properly display the word, not the id?<issue_comment>username_1: You could use explicit column names and refer to both tables (in this case using the aliases k and n), e.g.:
```
$listakontroli = $connecting->query("SELECT k.id
, k.podmiot
, k.miasto
, k.wszczeto
, k.zakonczono
, n.naruszenia
FROM kontrola k
INNER JOIN naruszenia n ON k.naruszenie_id=n.id");
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You need to use `LEFT OUTER JOIN` or separate the `ID` from the two tables. e.g.
```
$listakontroli = $connecting->query("SELECT kontrola.id as kid, naruszenia.id as nid, podmiot, miasto, etc* FROM kontrola INNER JOIN naruszenia ON kontrola.naruszenie_id=naruszenia.id");
```
This way you can properly distinguish the displayed IDs
Upvotes: 1
2018/03/17 | 1,744 | 6,811 |
<issue_start>username_0: I'm working on a Django project and to make the forms experience far smoother I want to spread a ModelForm across a few pages. It would be ideal if users who are logged in can save their progress in the form without actually posting the content (in this case, a `JobApplication` where users can come back to filling in info without actually sending off the application).
Currently I have looked at other answers on SO such as [this one](https://stackoverflow.com/questions/14901680/how-to-do-a-multi-step-form-in-django); but this only shows me how to utilise caching to store information between the views where the form is present.
Models.py (models, forms and views have been simplified for readability):
```
class JobApplication(models.Model):
job = models.ForeignKey(JobPost,on_delete=models.SET_NULL,...)
user = models.ForeignKey(AUTH_USER_MODEL,...)
details = models.CharField(max_length=300)
skills = models.CharField(max_length=300)
feedback = models.CharField(max_length=300)
#... [insert more fields] ...
```
Forms.py:
```
class Application(forms.ModelForm):
details = forms.CharField() # To go in page 1 of the form process
skills = forms.CharField() # To go in page 2
feedback = forms.CharField() # To go in page 3
class Meta:
model = JobApplication
fields = ['details','skills','feedback']
```
Views.py:
```
from . import forms
def view1(request):
form = forms.Application()
if request.method == 'POST':
form = forms.Application(data=request.POST)
... some logic here which I am not sure of ...
return render(request, 'view1.html', {})
def view2(request):
form = forms.Application()
if request.method == 'POST':
form = forms.Application(data=request.POST)
...
return render(request, 'view2.html', {})
def view3(request):
form = forms.Application()
if request.method == 'POST':
form = forms.Application(data=request.POST)
...
return render(request, 'view3.html', {})
```
Note that I'm happy to edit my forms or models to achieve this multi-page, save progress effect that you may see on job sites.
Let me know if there's any more code I can add that will be useful, since I'm not too sure what else will be required.
Thanks!<issue_comment>username_1: You will need a Form for each action you need. With it on hands, you can use a feature from Django 1.7 called Form Wizard (Yes, it is built in), the best way achieving this is using Class-Based Views, that is way more flexible, clean and cohesive than FBV in this case.
<https://docs.djangoproject.com/en/1.7/ref/contrib/formtools/form-wizard/#>
Basically you will define a list of steps and forms, both tied to the same URL. You can use customized templates for each form:
<https://docs.djangoproject.com/en/1.7/ref/contrib/formtools/form-wizard/#using-a-different-template-for-each-form>
[EDITED]
As jayt said in the comments, formtools was deprecated in version 1.8 and is now separate from the core package; it can be found at <https://github.com/django/django-formtools/>
Good luck. =)
Upvotes: 2 <issue_comment>username_2: >
> I'm working on a Django project and to make the forms experience far
> smoother I want to spread a ModelForm across a few pages. It would be
> ideal if users who are logged in can save their progress in the form
> without actually posting the content (in this case, a JobApplication
> where users can come back to filling in info without actually sending
> off the application).
>
>
>
You are mixing UI/UX concepts with technical implementation here.
Your overall process is a job application where you have these states:
1. create a new applicant (user) if not yet done
2. create a new application for that user
3. work on that application until the user says they are done (allowed to span multiple (browser) sessions)
4. mark application as done -> actually apply to the job
How the data for the application is collected (in whatever forms or such) and "submitted" in a web development sense is independent of the actual job application action - it just needs to happen beforehand.
So rather than just use one single Django form, you have these possibilities:
(A) Create smaller models that represent a certain content section of the form and that should get their own form. Create ModelForms for those and make them accessible in their own views.
(B) Stick with the single model as you have now but create custom Django forms for each of the separate pages as you have planned. You can still use `ModelForm` and `model=JobApplication` but each form has to specify the set of fields it covers and validate only these fields.
### Progress Tracking
(A) or (B): You could track how much information has been input (in percent, e.g. by simply counting all fields and all non-empty fields and calculating their percentage).
(A): With different django models you could add a last modified timestamp to each model and show if and when this section was edited.
### Main Page
In an overview page you could collect all the data for the user so that they can see what the application will look like (to someone else maybe even) - and there they can also click a button "apply to the job!", or they can open each specific form if they see that any of the data is not yet complete.
### Actual Job Application
Once the user clicks on "apply to the job" a different POST happens that for example sets a datetime field on the JobApplication model. This way you can identify all applications that have reached this process step.
Same with any other steps that happen after that. Maybe the user can create another application by copying an existing one. Or their accepted status is also entered in that system and they can log in and check that. Or the system sends out email notifications or similar. Just add fields for anything that is of interest, then you can also reflect that in the UI if you want to.
Upvotes: 2 <issue_comment>username_3: 1. I had a similar use case in my application. What I did was create
multiple forms out of the models and a central view controlling the
progress of the form.
The view has a list of forms it has to propagate through
```
GET : /form/ => form/0
POST : Save data to the form
```
2. Initially the form will have no initial data, for index > 0 the
initial data will be the previously saved model object
3. When the user clicks on next, increment the URL index counter; decrease
it for prev; don't save anything on skip
Here is a gist of how it would look.
<https://gist.github.com/bhavaniravi/b784c57ae9d24fce359ae44e2e90b8e3>
>
> I don't know if this is the best optimized method of all time but this is what I did. Any suggestions for improvement are most welcome
>
>
>
Upvotes: 3
2018/03/17 | 1,955 | 7,335 |
<issue_start>username_0: I have a table of invoices being prepared, and then ready for printing.
`[STATUS]` column is Draft, Print, Printing, Printed
I need to get the ID of the first (FIFO) record to be printed, and change the record status. The operation must be thread-safe so that another process does not select the same `InvoiceID`.
Can I do this (looks atomic to me, but maybe not ...):
1:
==
```
WITH CTE AS
(
SELECT TOP(1) [InvoiceID], [Status]
FROM INVOICES
WHERE [Status] = 'Print'
ORDER BY [PrintRequestedDate], [InvoiceID]
)
UPDATE CTE
SET [Status] = 'Printing'
, @InvoiceID = [InvoiceID]
```
... perform operations using `@InvoiceID` ...
```
UPDATE INVOICES
SET [Status] = 'Printed'
WHERE [InvoiceID] = @InvoiceID
```
or must I use this (for the first statement)
2:
==
```
UPDATE INVOICES
SET [Status] = 'Printing'
, @InvoiceID = [InvoiceID]
WHERE [InvoiceID] =
(
SELECT TOP 1 [InvoiceID]
FROM INVOICES WITH (UPDLOCK)
WHERE [Status] = 'Print'
ORDER BY [PrintRequestedDate], [InvoiceID]
)
```
... perform operations using `@InvoiceID` ... etc.
(I cannot hold a transaction open from changing status to "Printing" until the end of the process, i.e. when status is finally changed to "Printed").
EDIT:
In case it matters, the DB is `READ_COMMITTED_SNAPSHOT`.
I **can** hold a transaction for both UPDATE STATUS to "Printing" AND get the ID. But I **cannot** continue to keep transaction open all the way through to changing the status to "Printed". This is an SSRS report, and it makes several different queries to SQL to get various bits of the invoice, and it might crash/whatever, leaving the transaction open.
@<NAME> "If you want a queue" The FIFO sequence is not critical, I would just like invoices that are requested first to be printed first ... "more or less" (don't want any unnecessary complexity ...)
@Martin Smith "looks like a usual table as queue requirement" - yes, exactly that, thanks for the very useful link.
SOLUTION:
The solution I am adopting is from comments:
@lad2025 pointed me to [SQL Server Process Queue Race Condition](https://stackoverflow.com/questions/939831/sql-server-process-queue-race-condition/940001#940001) which uses `WITH (ROWLOCK, READPAST, UPDLOCK)` and @MartinSmith explained what the Isolation issue is and pointed me at [Using tables as Queues](http://rusanu.com/2010/03/26/using-tables-as-queues/) - which talks about exactly what I am trying to do.
I have not grasped why `UPDATE TOP 1` is safe, and `UPDATE MyTable SET xxx = yyy WHERE MyColumn = (SELECT TOP 1 SomeColumn FROM SomeTable ORDER BY AnotherColumn)` (without Isolation Hints) is not, and I ought to educate myself, but I'm happy just to put the isolation hints in my code and get on with something else :)
Thanks for all the help.
2018/03/17 | 534 | 2,038 |
<issue_start>username_0: I want to fire a clip on mouseover and it doesn't work. I can log to the console but it seems I am doing something wrong to play the clip … What is it?
```
window.onload = function() {
var clip1 = document.getElementById("clip1");
var projects = document.getElementById("projects");
function playClip1() {
clip1.play();
console.log("I should play clip1 now");
}
projects.addEventListener("mouseover", playClip1);
}
[projects](index.html)
```<issue_comment>username_1: May be your `src` audio file is missing on the path or something is wrong with your audio file loading/playing while hovering. I've tried with another audio file from wikimedia and it is working fine. I've put an extra `console.log()` message to check whether it is showing msg at the time of hover or not.
```js
window.onload = function(){
var clip1 = document.getElementById("clip1");
var projects = document.getElementById("projects");
function playClip1() {
console.log('project link hovered');
clip1.play();
console.log("I should play clip1 now");
}
projects.addEventListener("mouseover", playClip1);
}
```
```html
JS Bin
[projects](index.html)
```
See demo : <https://jsbin.com/dijediduzu/edit?html,js,output>
Upvotes: 0 <issue_comment>username_2: Your code should work, but you may get more information on what's going on by using a promise.
The following code will log a message in both cases: success or error. The error message may be useful to you.
```js
window.onload = function() {
var clip1 = document.getElementById("clip1");
var projects = document.getElementById("projects");
function playClip1() {
clip1.play()
.then(function() {
console.log('success');
})
.catch(function(error) {
console.log('error', error);
});
console.log("I should play clip1 now");
}
projects.addEventListener("mouseover", playClip1);
}
```
```html
[projects](index.html)
```
Upvotes: 2
2018/03/17 | 865 | 2,807 |
<issue_start>username_0: I have a cryptographic program in which I need to represent each character by its hex value and return all of these hex values as a char string. After multiple crypto functions I got a final decimal value of each `char`. Now at the end of a for cycle I need to put all of them in a string containing each character's hex value separated by spaces. In short, I have decimal numbers, which I need to convert to hex and put into a string for return. Thanks for responses. I tried `sprintf()` but I obviously can't put it into a char string because it can only hold one char per index. I am a university student and it's my first year working with C, so please keep that in mind. I hope it is not such a noob question.
I tried something.
At the top, before the loop starts, I wrote this:
```
char*res=calloc(size,sizeof(unsigned char));
```
and before the end of the loop I wrote this:
```
sprintf(&res[i],"%02x",dec);
```
dec - the decimal value of a character. If I print it out I get the right output.
Now my output should be `"80 9c 95 95 96 11 bc 96 b9 95 9d 10"`
but instead I got "`899991b9b9910`". That means I got only the 1st character of each hex value except the last one. I'm sure now you might see a solution for me. THANK YOU!
it means i got only 1st character of each hex value except the last one. im sure now you might get a solution for me. THANK YOU!<issue_comment>username_1: ```
char * getString(unsigned char *s, int len)
{
    char *res = calloc(sizeof(unsigned char), len*3+1);
    for(int i = 0; i < len; i++)
        sprintf(&res[i*3], "%02x ", s[i]);
    return res;
}
```
Will this help?
Usage:
```
unsigned char data[] = {'H', 'E', 'L', 'l', 'o'};
char *str = getString(data, sizeof(data)/sizeof(unsigned char));
printf("%s",str);
free(str);
```
Or:
```
int main()
{
    unsigned char input[20] = {0,};
    scanf("%19s", (char *)input);
    int len = (int)strlen((char *)input);
    encrypt(input, len);
    char *str = getString(input, len);
    printf("%s", str);
    free(str);
    return 0;
}
```
And the `void encrypt(unsigned char *input, int len)` modifies the input array.
I think converting the data at once is easier to implement.
Also you need to `#include <stdio.h>` and `#include <stdlib.h>`.
*this code may add an additional space at the end of the generated string...*
And [this post can answer you](https://stackoverflow.com/a/12839870/8614565), or your question is a *duplicate*, if I got your point.
And seeing that post, I'll have to practice C skills a lot.. I'm a student too, so I have more time to learn..
**edit**
Or To fit your need(?):
```
char * getString(unsigned char *s, int len)
{
    char *res = calloc(sizeof(unsigned char), len*3+1);
    for(int i = 0; i < len; i++)
        sprintf(&res[i*3], "%02x ", s[i]);
    return res;
}
```
Upvotes: 0 <issue_comment>username_2: I figured it out:
```
char *res = calloc(size * 3 + 1, sizeof(unsigned char));
sprintf(&res[f * 3], "%02x ", dec);
```
When I return and print `res`, it prints out as I wanted. Please, if you can, modify this answer for others to understand. Thank you all for help!
Upvotes: 2 [selected_answer]
2018/03/17 | 701 | 2,273 |
<issue_start>username_0: I have a question on a homework assignment where I need to list the quiz number and date of those quizzes that have a low score less than the average low score.
To do this I need to use a GROUP BY clause and a HAVING clause. It should look something like this:
```
select quiznum, quizdate
from quizzes
group by quiznum, quizdate, lowscore
having lowscore < avg(lowscore);
```
This however returns no data found.
If I put the average score in manually (which is approximately 14) it returns the information I need, but with regard to the question I do not know the average lowscore.
How do I list the quiz number and date in which the low score is less than the average low score?
2018/03/17 | 372 | 1,573 |
<issue_start>username_0: The languages table has: id, shortcode
The comments table has id, user_id, comment, language_id (foreign key)
In the Comment Model I defined Language as a hasOne Relationship
In the Language Model I defined Comments as a hasMany Relationship (this is wrong?).
In Tinker I get this error when I try to execute: $comment->language()->get():
```
Column not found: 1054 Unknown column 'languages.comment_id' in 'where clause' (SQL: select * from `languages` where `languages`.`comment_id` = 7 and `languages`.`comment_id` is not null)'
```
Why is Laravel searching in the languages table for the comment\_id? It seems I misunderstand something completely.
What is the right way to get the language shortcode? I thought $comment->language()->shortcode should work.
And what is the most efficient way to preload all the language\_id and shortcode information without performing duplicate queries, since this is used a lot?<issue_comment>username_1: Laravel makes assumptions about the key names in your relationships. Because you use the hasOne relationship, Laravel expects the foreign key to be in the languages table, named comment\_id.
Your Comment model should use a belongsTo relationship to Language and drop the hasOne. This way Laravel assumes that the foreign key is actually in the comments table, named language\_id, which is the case :).
Upvotes: 2 <issue_comment>username_2: Using the belongsTo relationship solved the problem.
Afterwards I could access it with $comment->language->shortcode;
thanks everyone!
Upvotes: 1
|
2018/03/17
| 981 | 3,610 |
<issue_start>username_0: I have the following method I created; it's nothing fancy - it just retrieves data from an HTTP server - but it is an async method.
```
public async Task<string> GetStringFromConsul(string key)
{
string s = "";
// attempts to get a string from Consul
try
{
//async method to get the response
HttpResponseMessage response = await this.http.GetAsync(apiPrefix + key);
//if it responds successfully
if (response.IsSuccessStatusCode)
{
//parse out a string and decode said string
s = await response.Content.ReadAsStringAsync();
var obj = JsonConvert.DeserializeObject>(s);
s = Encoding.UTF8.GetString(Convert.FromBase64String(obj[0].value));
}
else
{
s = requestErrorCodePrefix + response.StatusCode + ">";
}
}
catch(Exception e)
{
//need to do something with the exception
s = requestExceptionPrefix + e.ToString() + ">";
}
return s;
}
```
Then in the test I call the code just like I do during normal execution:
```
[Test]
public async Task GetStringFromConsulTest()
{
ConsulConfiguration cc = new ConsulConfiguration();
string a = cc.GetStringFromConsul("").GetAwaiter().GetResult();
Assert.AreEqual(a, "");
}
```
However I get an exception like so instead of any sort of string:
```
Message: Expected string length 514 but was 0. Strings differ at index 0.
Expected: "
```
I've looked around and found a few tutorials on this and tried it but to no avail. If anyone can point me in the right direction I would appreciate it, I'm pretty new to C# unit testing.<issue_comment>username_1: Maybe `this.http.GetAsync(apiPrefix + key);` is timing out. That would give you a TaskCanceledException. Not sure what your value of apiPrefix is.
Upvotes: 0 <issue_comment>username_2: In the NUnit Framework, use async/await in the unit test as in the following:
```
[Test]
public async Task GetStringFromConsulTest()
{
ConsulConfiguration cc = new ConsulConfiguration();
//string a = cc.GetStringFromConsul("").GetAwaiter().GetResult();
//use await instead
string a = await cc.GetStringFromConsul("");
Assert.AreEqual(a, "");
}
```
For more details, read [Async Support in NUnit](http://simoneb.github.io/blog/2013/01/19/async-support-in-nunit/)
It's better to test your method in case of firing exceptions [NUnit expected exceptions](https://stackoverflow.com/questions/3407765/nunit-expected-exceptions)
**Update:**
The comment:
>
> I still get the error even when structuring the method like this.
>
>
>
That error means that the test fails and there is a bug in the source code method `GetStringFromConsul`.
Your test method includes the Assert statement:
```
Assert.AreEqual(a, "");
```
That means you expect the `a` variable, which is calculated from `a=cc.GetStringFromConsul("")`, to be "" for the test to pass;
otherwise the test fails and the NUnit Framework fires an exception like:
```
Message: Expected string length 514 but was 0. Strings differ at index 0.
Expected: "
```
To resolve this exception, you should fix the bug in the method `GetStringFromConsul` so that it returns "" when the input parameter is "".
Upvotes: 2 <issue_comment>username_3: I'm a stickler for good error messages so I'd first change the assert to
```
Assert.AreEqual("", a);
```
because the **first** argument is your expected value. Now it will fail with
```
Message: Expected string length 0 but was 514. Strings differ at index 0.
Expected:
But was: "
```
...still a failure, but a much more sensible message.
Next, to pass, add an await to your async method call, as suggested by <NAME>.
Upvotes: 2
|
2018/03/17
| 894 | 2,979 |
<issue_start>username_0: I am seeing some strange behavior with BeautifulSoup as demonstrated in the example below.
```
import re
from bs4 import BeautifulSoup
html = """<p>This has a <b>color</b> of red. Because it likes the color red</p>
<p>This paragraph has a color of blue.</p>
<p>This paragraph does not have a color.</p>
"""
soup = BeautifulSoup(html, 'html.parser')
pattern = re.compile('color', flags=re.UNICODE+re.IGNORECASE)
paras = soup.find_all('p', string=pattern)
print(len(paras)) # expected to find 3 paragraphs with word "color" in it
2
print(paras[0].prettify())
<p>
 This paragraph has a color of blue.
</p>
print(paras[1].prettify())
<p>
 This paragraph does not have a color.
</p>
```
As you can see for some reason the first paragraph of `This has a **color** of red. Because it likes the color red` is not being picked up by `find_all(...)` and I cannot figure out why not.<issue_comment>username_1: If you want to grap the `'p'` you can just do:
```
import re
from bs4 import BeautifulSoup
html = """<p>This has a <b>color</b> of red. Because it likes the color red</p>
<p>This paragraph has a color of blue.</p>
<p>This paragraph does not have a color.</p>
"""
soup = BeautifulSoup(html, 'html.parser')
paras = soup.find_all('p')
for p in paras:
print (p.get_text())
```
Upvotes: 0 <issue_comment>username_2: I haven't actually figured out why specifying the string (or text for older versions of BeautifulSoup) parameter of `find_all(...)` doesn't give me what I want but, the following does give me a generalized solution.
```
pattern = re.compile('color', flags=re.UNICODE+re.IGNORECASE)
desired_tags = [tag for tag in soup.find_all('p') if pattern.search(tag.text) is not None]
```
Upvotes: 0 <issue_comment>username_3: The `string` property expects the tag to contain only text and not tags. If you try printing `.string` for the first `p` tag, it'll return `None`, since, it has tags in it.
Or, to explain it better, the [documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#string) says:
>
> If a tag has only one child, and that child is a `NavigableString`, the child is made available as `.string`
>
>
> If a tag contains more than one thing, then it’s not clear what `.string` should refer to, so `.string` is defined to be `None`
>
>
>
The way to overcome this, is to use a `lambda` function.
```
html = """<p>This has a <b>color</b> of red. Because it likes the color red</p>
<p>This paragraph has a color of blue.</p>
<p>This paragraph does not have a color.</p>
"""
soup = BeautifulSoup(html, 'html.parser')
first_p = soup.find('p')
print(first_p)
# <p>This has a <b>color</b> of red. Because it likes the color red</p>
print(first_p.string)
# None
print(first_p.text)
# This has a color of red. Because it likes the color red
paras = soup.find_all(lambda tag: tag.name == 'p' and 'color' in tag.text.lower())
print(paras)
# [<p>This has a <b>color</b> of red. Because it likes the color red</p>, <p>This paragraph has a color of blue.</p>, <p>This paragraph does not have a color.</p>]
```
Upvotes: 2
|
2018/03/17
| 539 | 2,004 |
<issue_start>username_0: I am using Openresty as a server. I have the configuration file of the nginx as per the <https://eclipsesource.com/blogs/2018/01/11/authenticating-reverse-proxy-with-keycloak/>.
I am getting following error "openidc.lua:1053: authenticate(): request to the redirect\_uri\_path but there's no session state found, client"
Can someone shed some light on this and help me solve the problem?
Regards,
Allahbaksh<issue_comment>username_1: Your redirect URI must not be set to `"/"` but to some arbitrary path that is not supposed to return content (like `/redirect_uri`). It is a "vanity" URL that is handled by `lua-resty-openidc`
Upvotes: 3 <issue_comment>username_2: I had the same problem and was able to fix it by setting the $session\_name variable in the server block. Example:
```lua
server {
...
server_name proxy.localhost;
#lua_code_cache off;
set $session_name nginx_session;
location / {
access_by_lua_block {
local opts = {
redirect_uri = "http://proxy.localhost/cb",
discovery = "http://127.0.0.1:9000/.well-known/openid-configuration",
client_id = "proxyclient-id",
client_secret = "secret",
ssl_verify = "no",
scope = "openid"
}
-- call authenticate for OpenID Connect user authentication
local res, err = require("resty.openidc").authenticate(opts)
if err then
ngx.status = 500
ngx.say(err)
ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end
ngx.req.set_header("X-USER", res.id_token.sub)
}
proxy_pass http://localhost:8080/;
proxy_set_header x-forwarded-proto $scheme;
}
}
```
Another thing to pay attention to is the lua\_code\_cache off directive; It could break the session. See: <https://github.com/bungle/lua-resty-session#notes-about-turning-lua-code-cache-off>
Upvotes: 1
|
2018/03/17
| 556 | 1,732 |
<issue_start>username_0: I am attempting to sort an array of objects by a name property that exists on each object. When using the `sort()` method with the code below I am getting the following error:
`ERROR ReferenceError: b is not defined`
Here is my code:
```
myArray.sort( (a, b) => {
return (typeof a.name: string === 'string') - (typeof b.name === 'string')|| a.name - b.name || a.name.localeCompare(b.name)};
```
Here is what is odd though...
When I run:
```
myArray.sort( (a, b) => {
console.log(a.name);
console.log(b.name);
});
```
It logs the names perfectly fine. What am I missing??
**Just to be a thorough little bit of context:**
I am using this method after doing an HTTP call from an angular service.ts file and this array is being passed to my component and subscribed to. And I am using Angular, so this would be Typescript compiling to JavaScript. I also have another `myArray.forEach()` method just below my `sort()` method and that is working.<issue_comment>username_1: Is this what you want?
```js
var a = [
{ name: "John" },
{ name: "Jack" },
{ name: "Bob" }
];
a.sort(function (a, b) {
if (a.name > b.name) return 1;
if (a.name < b.name) return -1;
return 0;
});
console.log(a);
```
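The same comparison can also be written with `localeCompare`, which already returns the negative/zero/positive number that `sort` expects. A sketch (the function name `sortByName` is chosen here for illustration):

```javascript
// Non-mutating sort by the `name` property using localeCompare.
function sortByName(arr) {
  return [...arr].sort((a, b) => String(a.name).localeCompare(String(b.name)));
}
```

Spreading into a new array first keeps the original array untouched, which is often what you want inside an Angular component.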
Upvotes: 5 [selected_answer]<issue_comment>username_2: You could use a comparison which works independently of the type of string or number, by moving numerical values to top.
```js
var array = [{ name: 20 }, { name: 21 }, { name: 2 }, { name: 11 }, { name: 1 }, { name: 'John' }, { name: 'Jane' }, { name: 'Frank' }, { name: 'Mary' },]
array.sort((a, b) => (typeof a.name === 'string') - (typeof b.name === 'string') || a.name > b.name || -(a.name < b.name));
console.log(array);
```
Upvotes: 1
|
2018/03/17
| 1,925 | 6,283 |
<issue_start>username_0: I have written a function for computing the volume of the intersection of a cube and a half-space, and now I'm writing tests for it.
I've tried computing the volume numerically like this:
```
integral = scipy.integrate.tplquad(lambda z, y, x: int(Vector(x, y, z).dot(normal) < distance),
-0.5, 0.5,
lambda x: -0.5, lambda x: 0.5,
lambda x, y: -0.5, lambda x, y: 0.5,
epsabs=1e-5,
epsrel=1e-5)
```
... basically I integrate over the whole cube and each point gets the value 1 or 0 based on whether it is inside the half-space.
This gets very slow (more than several seconds per invocation) and keeps giving me warnings like
```
scipy.integrate.quadpack.IntegrationWarning: The integral is probably divergent, or slowly convergent
```
Is there a better way to calculate this volume?<issue_comment>username_1: If we assume that boundary of the half-space is given by $\{(x, y, z) \mid ax + by + cz + d = 0 \}$ with $c \not= 0$, and that the half-space of interest is that below the plane (in the $z$-direction), then your integral is given by
```
scipy.integrate.tplquad(lambda z, y, x: 1,
-0.5, 0.5,
lambda x: -0.5, lambda x: 0.5,
lambda x, y: -0.5, lambda x, y: max(-0.5, min(0.5, -(b*y+a*x+d)/c)))
```
Since at least one of $a$, $b$, and $c$ must be non-zero, the case $c = 0$ can be handled by changing coordinates.
Upvotes: 1 <issue_comment>username_2: ### Integration
Integration of a discontinuous function is problematic, especially in multiple dimension. Some preliminary work, reducing the problem to an integral of a continuous function, is needed. Here I work out the height (top-bottom) as a function of x and y, and use `dblquad` for that: it returns in `36.2 ms`.
I express the plane equations as `a*x + b*y + c*z = distance`. Some care is needed with the sign of c, as the plane could be a part of the top or of the bottom.
```
from scipy.integrate import dblquad
distance = 0.1
a, b, c = 3, -4, 2 # normal
zmin, zmax = -0.5, 0.5 # cube bounds
# preprocessing: make sure that c > 0
# by rearranging coordinates, and flipping the signs of all if needed
height = lambda y, x: min(zmax, max(zmin, (distance-a*x-b*y)/c)) - zmin
integral = dblquad(height, -0.5, 0.5,
lambda x: -0.5, lambda x: 0.5,
epsabs=1e-5, epsrel=1e-5)
```
### Monte Carlo methods
Picking sample points at random (Monte Carlo method) avoids the issues with discontinuity: the accuracy is about the same for discontinuous as for continuous functions, the error decreases at the rate `1/sqrt(N)` where N is the number of sample points.
The [polytope package](https://tulip-control.github.io/polytope/) uses it internally. With it, a computation could go as
```
import numpy as np
import polytope as pc
a, b, c = 3, 4, -5 # normal vector
distance = 0.1
A = np.concatenate((np.eye(3), -np.eye(3), [[a, b, c]]), axis=0)
b = np.array(6*[0.5] + [distance])
p = pc.Polytope(A, b)
print(p.volume)
```
Here A and b encode the halfspaces as `Ax<=b`: the first six rows are for the faces of the cube, the last is for the plane.
To have more control over precision, either implement Monte-Carlo method yourself (easy) or use [`mcint`](https://pypi.python.org/pypi/mcint/) package (about as easy).
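A hand-rolled Monte Carlo estimate along these lines is only a few lines of NumPy. This is a sketch (the function name and defaults are chosen here; `normal` and `distance` play the same roles as above, and the cube is the unit cube `[-0.5, 0.5]^3`, so its volume is 1):

```python
import numpy as np

def halfspace_cube_volume_mc(normal, distance, n=200_000, seed=0):
    """Estimate the volume of the unit cube cut by normal . x < distance
    via uniform random sampling."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-0.5, 0.5, size=(n, 3))
    inside = pts @ np.asarray(normal, dtype=float) < distance
    # The cube has volume 1, so the hit fraction is the volume estimate.
    return inside.mean()
```

For the half-space `z < 0` the estimate should come out close to 0.5, with the error shrinking like `1/sqrt(n)`.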
### Polytope volume: a task for linear algebra, not for integrators
You want to compute the volume of a **polytope**, a convex body formed by intersecting halfspaces. This ought to have an algebraic solution. SciPy has [HalfspaceIntersection](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.HalfspaceIntersection.html#scipy.spatial.HalfspaceIntersection) class for these but so far (1.0.0) does not implement finding the volume of such an object. If you could find the vertices of the polytope, then the [ConvexHull](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.ConvexHull.html#scipy.spatial.ConvexHull) class could be used to compute the volume. But as is, it seems that SciPy spatial module is no help. Maybe in a future version of SciPy...
Upvotes: 2 [selected_answer]<issue_comment>username_3: Integration of characteristic function is mathematically correct, but not practical. That is because most integration schemes are designed to integrate *polynomials* to some degree exactly, and in consequence all "relatively smooth" functions reasonably well. Characteristic functions, however, are everything but smooth. Polynomial-style integration will get you nowhere.
A much better-suited approach is to build a discretized version of the domain first, and then simply sum up the volumes of the little tetrahedra.
Discretization in 3D can be done, for example, with [pygalmesh](https://github.com/nschloe/pygalmesh) (a project of mine interfacing [CGAL](https://www.cgal.org/)). The below code discretizes the cut-off cube to
[](https://i.stack.imgur.com/YJORq.png)
You can increase the precision by decreasing `max_cell_circumradius` and/or `max_edge_size_at_feature_edges`, but meshing will take longer then. Moreover, you could specify "feature edges" to resolve the intersection edges exactly. This would give you the exactly correct result even with the coarsest cell size.
```py
import pygalmesh
import numpy
c = pygalmesh.Cuboid([0, 0, 0], [1, 1, 1])
h = pygalmesh.HalfSpace([1.0, 2.0, 3.0], 4.0, 10.0)
u = pygalmesh.Intersection([c, h])
mesh = pygalmesh.generate_mesh(
u, max_cell_circumradius=3.0e-2, max_edge_size_at_feature_edges=1.0e-2
)
def compute_tet_volumes(vertices, tets):
cell_coords = vertices[tets]
a = cell_coords[:, 1, :] - cell_coords[:, 0, :]
b = cell_coords[:, 2, :] - cell_coords[:, 0, :]
c = cell_coords[:, 3, :] - cell_coords[:, 0, :]
    # omega = dot(a, cross(b, c)), the scalar triple product
omega = numpy.einsum("ij,ij->i", a, numpy.cross(b, c))
# https://en.wikipedia.org/wiki/Tetrahedron#Volume
return abs(omega) / 6.0
vol = numpy.sum(compute\_tet\_volumes(mesh.points, mesh.get\_cells\_type("tetra")))
print(f"{vol:.8e}")
```
```
8.04956436e-01
```
Upvotes: 2
|
2018/03/17
| 809 | 2,697 |
<issue_start>username_0: I'm trying to learn async programming in C++. In Python, we have `await`, with which we can resume a function from that point, but in C++ `future` waits for the results and halts the next line of code. What if we don't want to get the result, but instead continue to the next line of code? How can I do this?<issue_comment>username_1: >
> future waits for the results and halts the next line of code
>
>
>
This is only true when you invoke `.get()` or when the future is being destroyed. You can run multiple tasks in parallel with `std::future`:
```
auto f = std::async(std::launch::async, foo);
auto res0 = bar();
auto res1 = f.get();
```
In the example above, `bar` and `foo` will run in parallel.
---
If you want to attach asynchronous continuations to an existing future, currently you cannot do that with `std::future`.
`boost::future` supports non-blocking `.then(...)`, `.when_all(...)`, and `.when_any(...)` continuations. These are proposed for standardization in ["Extensions for concurrency"](http://en.cppreference.com/w/cpp/experimental/concurrency).
There's also a ["Coroutines" TS](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/n4723.pdf) that aims to introduce resumable functions and `co_await`/`co_yield`.
Unsurprisingly, `boost` also provides a coroutine library that can be used today to implement resumable functions.
Upvotes: 2 <issue_comment>username_2: You can use [`std::future::wait_for`](http://en.cppreference.com/w/cpp/thread/future/wait_for) to check if the task has completed execution, *e.g.*:
```
if (future.wait_for(100ms) == std::future_status::ready) {
// Result is ready.
} else {
// Do something else.
}
```
The [Concurrency TS](http://en.cppreference.com/w/cpp/experimental/concurrency) includes [`std::future::is_ready`](http://en.cppreference.com/w/cpp/experimental/future/is_ready) (may be included in C++20), which is non blocking. If it gets included in the standard, usage will be something like:
```
auto f = std::async(std::launch::async, my_func);
while (!f.is_ready()) {
/* Do other stuff. */
}
auto result = f.get();
/* Do stuff with result. */
```
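Since `is_ready` is not in the standard yet, the same polling pattern can be written today with `wait_for` alone. A self-contained sketch (the function, the lambda, and the timings are chosen here for illustration):

```cpp
#include <cassert>
#include <chrono>
#include <future>
#include <thread>

// Poll a future with wait_for, doing "other work" until it is ready.
int run_with_polling() {
    auto f = std::async(std::launch::async, [] {
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
        return 42;
    });
    int polls = 0;
    while (f.wait_for(std::chrono::milliseconds(1)) != std::future_status::ready)
        ++polls;  // stand-in for doing other useful work between polls
    return f.get();
}
```

A zero-length `wait_for` (`std::chrono::seconds(0)`) turns the same call into a pure non-blocking readiness check.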
Alternatively, the Concurrency TS also includes [`std::future::then`](http://en.cppreference.com/w/cpp/experimental/future/then), which I interpret can be used *e.g.* as:
```
auto f = std::async(std::launch::async, my_func)
.then([] (auto fut) {
auto result = fut.get();
/* Do stuff when result is ready. */
});
/* Do other stuff before result is ready. */
```
Also see: [How to check if a std::thread is still running?](https://stackoverflow.com/q/9094422/873025)
Upvotes: 4 [selected_answer]
|
2018/03/17
| 938 | 2,983 |
<issue_start>username_0: I have a class:
```
class Cave
{
private:
int no_of_rooms;
public:
vector<int> rooms;
Cave(int r);
Cave(){};
};
```
The constructor of Cave fills the vector rooms with random integers:
```
Cave::Cave(int r)
:no_of_rooms{ r }
{
int i = 0;
while (i < no_of_rooms)
{
    rooms.push_back(rand());
    ++i;
}
}
```
I create another class:
```
class Player : public Cave
{
public:
Player(int *plyr)
:p_ptr{ plyr }
{
plyr = &rooms[0];
}
private:
int* p_ptr = nullptr;
};
```
This Player class seems to be a mess. I am trying to get access to the same rooms vector filled up by the Cave constructor.
|
2018/03/17
| 1,175 | 4,554 |
<issue_start>username_0: MVC application (ASP.NET MVC, client: jquery).
Problem: the second ajax request waits until the first ajax request is done.
I need the first and the second ajax requests to execute concurrently.
The page asks the server to determine the count of records (the first ajax request), which is very slow (~5-7 seconds).
The operator clicks a button to open a card to edit it (the second ajax request, fast, gets the Dto model).
The user shouldn't have to wait for the first request; he wants to work immediately.
As a result, in Chrome's network page, both requests are in status 'pending'. The second waits for the first.
Question: how can I send the requests so that they execute asynchronously?
The first ajax-request:
```
`window.jQuery`.ajax({
type: 'POST',
url: Url.Action("GetCountBooks", "Book");
contentType: "application/json; charset=utf-8",
dataType: 'json',
data: JSON.stringify({ typeBook: "...", filter: "..." };),
success: function (data) {
// show in UI page the count of books by filter and params
},
error: function (data) {
//show error
}});
public class BookController : Controller
{
[HttpPost]
public NJsonResult GetCountBooks(string typeBook, Filter filter)
{
var data = DbProvider.GetCountBooks(typeBook, filter)
if (data.Result == ResultType.Success)
{
var count = data.Data;
return new NJsonResult
{
Data = new { Data = count }
};
}
return new NJsonResult
{
Data = new { Error = "Error while counting the books." }
};
}
}
```
The second ajax-request:
```
`window.jQuery`.ajax({
type: 'POST',
dataType: 'json',
contentType: "application/json",
url: Url.Action("GetBookById", "Book"),
data: JSON.stringify({ id: bookId }),
success: function (data) {
// show jquery dialog form to edit dto-model.
},
error: function (data) {
//show error
}});
public class BookController : Controller
{
[HttpPost]
public NJsonResult GetBookById(int id)
{
var data = DbProvider.GetBookById(id)
if (data.Result == ResultType.Success)
{
var book = data.Data;
return new NJsonResult
{
Data = new { Data = book }
};
return new NJsonResult
{
Data = new { Error = "The book is not found." }
};
}
return new NJsonResult
{
Data = new { Error = "Error while getting the book." }
};
}
}
```
I cannot combine the ajax requests into one! The user can send various second requests.<issue_comment>username_1: You need a fork-join splitter to fork 2 tasks and join based on some condition.
For example here is my implementation:
```
function fork(promises) {
return {
join: (callback) => {
let numOfTasks = promises.length;
let forkId = Math.ceil(Math.random() * 1000);
fork_join_map[forkId] = {
expected: numOfTasks,
current: 0
};
promises.forEach((p) => {
p.then((data) => {
fork_join_map[forkId].current++;
if (fork_join_map[forkId].expected === fork_join_map[forkId].current) {
if (callback) callback(data)
}
})
});
}
}}
```
Pass any number of async tasks (promises) into `fork` method and `join` when all are done. The **done** criteria here is managed by simple global object `fork_join_map` which tracks the results of your fork-join process (global is not good but its just an example). The particular fork-join is identified by `forkId` which is 0..1000 in this example which is not quite good again, but I hope you got the idea.
With jQuery you can create promise with `$.when( $.ajax(..your ajax call) )`
In the end you can join your promises like this
```
fork([
$.when( $.ajax(..your ajax call 1) ),
$.when( $.ajax(..your ajax call 2) )
]).join(() => {
// do your logic here when both calls are done
});
```
It's my own implementation; there may be already-written library functions for this in jQuery - I don't know. Hope this gives you a right direction at least.
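For reference, modern JavaScript has this join built in: `Promise.all` resolves once every forked task has settled successfully, so the hand-rolled bookkeeping above can be replaced by a sketch like this (`forkJoin` is a name chosen here):

```javascript
// Fork: start every task; join: Promise.all resolves when all are done,
// passing an array with one result per task, in order.
function forkJoin(tasks) {
  return Promise.all(tasks.map((start) => start()));
}
```

Since `$.ajax` returns a thenable, usage would look like `forkJoin([() => $.ajax(/* call 1 */), () => $.ajax(/* call 2 */)]).then((results) => { /* both done */ })`.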
Upvotes: 2 <issue_comment>username_2: The solution is to add attribute to Asp Controller: [SessionState(System.Web.SessionState.SessionStateBehavior.ReadOnly)]
<http://johnculviner.com/asp-net-concurrent-ajax-requests-and-session-state-blocking/>
Upvotes: 1 [selected_answer]
|
2018/03/17
| 740 | 2,577 |
<issue_start>username_0: I would like to enter 3 players with 3 scores in using 2 arrays.
For now I'm stuck on the ranking; how should I do it?
Small example:
```
Player 1 : Jeremy
Score Jeremy : 12
Player 2 : Julien
Score Julien : 18
Player 3 : Olivia
Score Olivia : 22
```
For the ranking we should have
```
The first => Olivia with 22 scores
The second => Julien with 18 scores
The third => Jeremy with 12 scores
```
Here is my code.
```
function main() {
var players = new Array();
var scores = new Array();
for(i = 0; i<3; i++) {
players[i] = prompt("Player " + (i+1) + " : ");
scores[i] = prompt("Score " + (players[i]) + " : ");
}
}
```
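One way to compute the ranking from those two arrays (a sketch; `rank` is a name chosen for illustration, and `Number(...)` converts the strings returned by `prompt` into numbers before comparing):

```javascript
// Pair each player with a numeric score, then sort the pairs
// in descending order of score.
function rank(players, scores) {
  return players
    .map((name, i) => ({ name, score: Number(scores[i]) }))
    .sort((a, b) => b.score - a.score);
}
```

The returned array can then be printed in order to produce "The first => ...", "The second => ...", and so on.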
Thank you in advance.
|
2018/03/17
| 1,157 | 3,763 |
<issue_start>username_0: Using the NAudio framework, I have written code like this to play some midi notes:
```
// Guts of a play note method which takes a cancellation token, a note
// a channel and a duration - CurrentVolume is a property of the class
// that plays the notes
midiOut.Send(MidiMessage.StartNote(note, CurrentVolume, channel).RawData);
try
{
await Task.Delay(duration, cancellationToken);
}
finally
{
midiOut.Send(MidiMessage.StopNote(note, CurrentVolume, channel).RawData);
}
```
And this is working okay, but every so often there is a little skip/delay in the rendering which I am assuming is coming from the Task.Delay not always being exact. What I would like to do is just generate a midi collection and send the whole collection to the midi out device, but I can't seem to find a way to do that. I have gotten as far as generating a collection and I know how to save that to a file - so if the solution is to make a file and then somehow send the file, this is also acceptable.
```
var collection = new MidiEventCollection(0, 120);
collection.AddEvent(new NoteOnEvent(0, 1, 64, 127, 15), 1);
collection.AddEvent(new NoteOnEvent(15, 1, 65, 127, 15), 1);
collection.AddEvent(new NoteOnEvent(30, 1, 66, 127, 15), 1);
collection.AddEvent(new NoteOnEvent(45, 1, 67, 127, 15), 1);
collection.AddEvent(new NoteOnEvent(60, 1, 68, 127, 15), 1);
collection.PrepareForExport();
```<issue_comment>username_1: There is a Windows API that lets you emit batches of MIDI events (see [midiStreamOut](https://msdn.microsoft.com/en-us/library/dd798487(v=vs.85).aspx) for example) which would be ideal for this scenario but unfortunately NAudio does not contain wrappers for this. NAudio's MIDI capabilities are more focused on reading and writing MIDI files. Your options are either to create the p/invoke wrappers for the MIDI APIs I mentioned or to try a different audio library such as [MIDI.NET](https://github.com/obiwanjacobi/midi.net)
Upvotes: 2 [selected_answer]<issue_comment>username_2: Using [DryWetMIDI](https://github.com/melanchall/drywetmidi) you can write this code:
```
using Melanchall.DryWetMidi.Common;
using Melanchall.DryWetMidi.Devices;
using Melanchall.DryWetMidi.Core;
using Melanchall.DryWetMidi.Interaction;
// ...
var eventsToPlay = new MidiEvent[]
{
new NoteOnEvent((SevenBitNumber)100, SevenBitNumber.MaxValue) { Channel = (FourBitNumber)10 },
new NoteOffEvent((SevenBitNumber)100, SevenBitNumber.MinValue) { Channel = (FourBitNumber)10 },
// ...
};
using (var outputDevice = OutputDevice.GetByName("Microsoft GS Wavetable Synth"))
using (var playback = new Playback(eventsToPlay, TempoMap.Default, outputDevice))
{
playback.Play();
}
```
Or if you just need to play notes on a single channel, you can use [Pattern](https://melanchall.github.io/drywetmidi/articles/composing/Pattern.html):
```
using MusicTheory = Melanchall.DryWetMidi.MusicTheory;
using Melanchall.DryWetMidi.Composing;
// ...
var pattern = new PatternBuilder()
.Note(MusicTheory.Octave.Get(3).ASharp, length: MusicalTimeSpan.Quarter)
.Note(MusicTheory.Octave.Get(3).C, length: MusicalTimeSpan.Eighth)
// ...
.Build();
using (var outputDevice = OutputDevice.GetByName("Microsoft GS Wavetable Synth"))
{
pattern.Play(TempoMap.Default, (FourBitNumber)10, outputDevice);
}
```
Note that `Play` will block the calling thread until all MIDI data is played. For non-blocking playback use the `Start` method of the `Playback` class:
```
var playback = pattern.GetPlayback(TempoMap.Default, (FourBitNumber)10, outputDevice);
playback.Start();
```
You can read more about playing MIDI data on the [Playback](https://melanchall.github.io/drywetmidi/articles/playback/Overview.html) page of the library docs.
Upvotes: 0
|
2018/03/17
| 1,043 | 3,501 |
<issue_start>username_0: I've got two types: User and UserLeague
```
type User {
id: ID! @unique
email: String! @unique
password: String!
name: String!
predictions: [Prediction!]!
leagues: [UserLeague!]! @relation(name: "MemberOfLeague")
ownedLeagues: [UserLeague!]! @relation(name: "LeagueOwner")
}
type UserLeague {
id: ID! @unique
passcode: String!
name: String!
history: [Prediction!]!
users: [User!]! @relation(name: "MemberOfLeague")
owner: User! @relation(name: "LeagueOwner")
}
```
Which to me looks an awful lot like it's analogous to the example in the prisma docs here <https://www.prismagraphql.com/docs/reference/service-configuration/data-modelling-(sdl)-eiroozae8u#the-@relation-directive> with the `User` and `Story` types.
But when I try to deploy with `yarn prisma deploy`, I get the following error message:
>
> "message": "There is a relation ambiguity during the migration. Please first name the old relation on your schema. The ambiguity is on a relation between User and UserLeague."
>
>
>
I don't quite see what I'm missing. Thanks very much for your help :)
|
2018/03/17
| 1,131 | 3,193 |
<issue_start>username_0: My code below works to convert phone letters to phone numbers, but I didn't know if there was a better way to do this:
```
def phone_letter_converter(self):
    if not self.isdigit():
        number_upper = self.upper()
        new_number = ""
        for ch in number_upper:
            if ch == 'A' or ch == 'B' or ch == 'C':
                new_number += '2'
            elif ch == 'D' or ch == 'E' or ch == 'F':
                new_number += '3'
            elif ch == 'G' or ch == 'H' or ch == 'I':
                new_number += '4'
            elif ch == 'J' or ch == 'K' or ch == 'L':
                new_number += '5'
            elif ch == 'M' or ch == 'N' or ch == 'O':
                new_number += '6'
            elif ch == 'P' or ch == 'Q' or ch == 'R' or ch == 'S':
                new_number += '7'
            elif ch == 'T' or ch == 'U' or ch == 'V':
                new_number += '8'
            elif ch == 'W' or ch == 'X' or ch == 'Y' or ch == 'Z':
                new_number += '9'
            else:
                new_number += ch
        return new_number
```<issue_comment>username_1: You could use the `in` operator :
So instead of `if ch == 'A' or ch == 'B' or ch == 'C':`, `if ch in 'ABC'` will check if (for example) `A` is in `ABC`.
Or, you could use a [dictionary](https://docs.python.org/3/tutorial/datastructures.html) here:
```
conversion_dict = {'A': '2', 'B': '2', 'C': '2',\
'D': '3', 'E': '3', 'F': '3',\
'G': '4', 'H': '4', 'I': '4',\
'J': '5', 'K': '5', 'L': '5',\
'M': '6', 'N': '6', 'O': '6',\
'P': '7', 'Q': '7', 'R': '7', 'S': '7',\
                   'T': '8', 'U': '8', 'V': '8',\
'W': '9', 'X': '9', 'Y': '9', 'Z': '9'}
number_letters = 'KJLWABJEKF'
new_number = ""
for c in number_letters:
new_number += conversion_dict[c]
print(new_number) # 5559225353
```
This can be shortened even further by using a [`"".join(...)`](https://docs.python.org/3/library/stdtypes.html#str.join) and a [list comprehension](https://docs.python.org/3.6/tutorial/datastructures.html#list-comprehensions):
```
number_letters = 'KJLWABJEKF'
new_number = ''.join([conversion_dict[c] for c in number_letters]) # 5559225353
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can use a [dict](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) to map that like:
### Code:
```
letter_to_number = dict(
A=2, B=2, C=2, D=3, E=3, F=3,
G=4, H=4, I=4, J=5, K=5, L=5,
M=6, N=6, O=6, P=7, Q=7, R=7, S=7,
    T=8, U=8, V=8, W=9, X=9, Y=9, Z=9,
)
letter_to_number = {k: str(v) for k, v in letter_to_number.items()}
letter_to_number.update({k.lower(): v for k, v in letter_to_number.items()})
def phone_letter_converter(letters_in):
return ''.join(letter_to_number.get(l, l) for l in letters_in)
```
This works by building an initial dict of uppercase letters to numbers, then converting the numbers to strings, and then adding the lowercase versions of the letters to the lookup.
### Test Code:
```
print(phone_letter_converter('AbCdZ'))
```
### Result:
```
22239
```
Upvotes: 0
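As a further alternative (a sketch, not taken from either answer above): Python's built-in `str.maketrans`/`str.translate` can apply the whole mapping in one pass, leaving characters without a mapping (digits, dashes, and so on) untouched:

```python
# Build the keypad mapping once: 2 -> ABC, 3 -> DEF, ..., 9 -> WXYZ.
keypad = {letter: str(digit)
          for digit, letters in enumerate(
              ["ABC", "DEF", "GHI", "JKL", "MNO", "PQRS", "TUV", "WXYZ"], start=2)
          for letter in letters}
# Add lowercase letters, then compile a translation table.
keypad.update({k.lower(): v for k, v in keypad.items()})
table = str.maketrans(keypad)

def phone_letter_converter(number):
    # Unmapped characters pass through unchanged.
    return number.translate(table)

print(phone_letter_converter("1-800-Flowers"))  # 1-800-3569377
```

The table is built once at module load, so repeated conversions only pay for the single `translate` pass.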
|
2018/03/17
| 932 | 2,569 |
<issue_start>username_0: When using `slcli` to list virtual servers in a Softlayer account:
```
slcli vm list
```
it takes a long time and eventually produces the following error:
```
SoftLayerAPIError(SOAP-ENV:Server): Internal Error
```
This used to work in the same account. Listing VMs in specific data centers works OK still which makes me think that I'm hitting some built-in limit on the number of objects that can be returned... Is there a limit and if there is what is it?<issue_comment>username_1: You could use the `in` operator :
So instead of `if ch == 'A' or ch == 'B' or ch == 'C':`, `if ch in 'ABC'` will check if (for example) `A` is in `ABC`.
Or, you could use a [dictionary](https://docs.python.org/3/tutorial/datastructures.html) here:
```
conversion_dict = {'A': '2', 'B': '2', 'C': '2',\
'D': '3', 'E': '3', 'F': '3',\
'G': '4', 'H': '4', 'I': '4',\
'J': '5', 'K': '5', 'L': '5',\
'M': '6', 'N': '6', 'O': '6',\
'P': '7', 'Q': '7', 'R': '7', 'S': '7',\
'T': '8', 'U': '9', 'V': '8',\
'W': '9', 'X': '9', 'Y': '9', 'Z': '9'}
number_letters = 'KJLWABJEKF'
new_number = ""
for c in number_letters:
new_number += conversion_dict[c]
print(new_number) # 5559225353
```
This can be shortened even further by using a [`"".join(...)`](https://docs.python.org/3/library/stdtypes.html#str.join) and a [list comprehension](https://docs.python.org/3.6/tutorial/datastructures.html#list-comprehensions):
```
number_letters = 'KJLWABJEKF'
new_number = ''.join([conversion_dict[c] for c in number_letters]) # 5559225353
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can use a [dict](https://docs.python.org/3/tutorial/datastructures.html#dictionaries) to map that like:
### Code:
```
letter_to_number = dict(
A=2, B=2, C=2, D=3, E=3, F=3,
G=4, H=4, I=4, J=5, K=5, L=5,
M=6, N=6, O=6, P=7, Q=7, R=7, S=7,
T=8, U=8, V=9, W=9, X=9, Y=9, Z=9,
)
letter_to_number = {k: str(v) for k, v in letter_to_number.items()}
letter_to_number.update({k.lower(): v for k, v in letter_to_number.items()})
def phone_letter_converter(letters_in):
return ''.join(letter_to_number.get(l, l) for l in letters_in)
```
This works by building a initial dict of uppercase letters to numbers. Then converts the numbers to letters of those numbers, and then adds the lower case version of the lookup.
### Test Code:
```
print(phone_letter_converter('AbCdZ'))
```
### Result:
```
22239
```
Upvotes: 0
|
2018/03/17
| 566 | 1,833 |
<issue_start>username_0: I have navigation bar that collapses on windows resize. Besides the usual stuff in nav bar like about, contact...etc, i have two social media links (facebook and twitter). After i resize window, both of the social media items jump to the right of collapsed bar and also show up in the "dropdown" of the collapsed bar.
I would like for the social media items to show on the left of the so called dropdown button and for them to not show up in the collapsed bar.
I've tried this by sourcing them outside of collapsed bar div, but nothing changes.
Any help?
```
[](#)
* [Home (current)](#)
* [About](#)
* [Our team](#)
* [Contact](#)
*
*
```
For visual:
<https://jsfiddle.net/8ay2g0tL/3/>
Thank you!
**RESULT**
I'm not sure if this is best practice or not, but the way I finally was able to hide it from the collapsed navbar is by setting it to `display: none`,
like this:
```
@media only screen and (max-width: 998px) {
  #navbarfbtw {
    display: none;
  }
}
```
<issue_comment>username_1: Just wrap the `navbar-social` items inside the parent `collapse navbar-collapse` element. There is no reason to use a separate list; just add the social `li` items *(with the `navbar-social` class)* to the parent `ul` element like this:
[**Updated JsFiddle**](https://jsfiddle.net/us3tsLx2/3/)
```html
[](#)
* [Home (current)](#)
* [About](#)
* [Our team](#)
* [Contact](#)
*
*
```
Upvotes: 1 <issue_comment>username_2: It's to do with the order in which the elements are rendered on the page.
The collapsed menu is being "un-hidden" and shunting your icons out of place.
Simply put the "navbar-social" div BEFORE the "navbarForCollapse" div.
<https://jsfiddle.net/16x8jhu4/9/>
```
*
*
* [Home (current)](#)
* [About](#)
* [Our team](#)
* [Contact](#)
```
Upvotes: 0
|
2018/03/17
| 661 | 2,128 |
<issue_start>username_0: I have a problem with trim, it doesnt work as i expected, when the user writes only spaces in username form ("studentname") it should write you didint fill all fields ("niste izpolnili vsa polja") and i dont know how to achieve that, sorry if question is duplicate but i didnt find the answer to fix my problem
here is the code:
```html
vaja 5: PHP
Registracija
Vnesi ime
Vnesi geslo
Geslo še enkrat
<?php
if (isset($_POST["button"]))
{
    echo $_POST['studentname'];
}
?>
<?php
if (isset($_POST["button"])) {
    $studentname = trim( $_POST['studentname'] );
    $password1 = $_POST['password1'];
    $password2 = $_POST['password2'];
}
if ($_POST['studentname'] == "" || $_POST['password1'] == "" || $_POST['password2'] == "") {
    echo "Niste izpolnili vsa polja";
} else
if ($_POST['password1'] != $_POST['password2']) {
    echo "Gesli se ne ujemata";
} else {
    echo "Registracija uspela";
}
?>
```<issue_comment>username_1: You register a variable `$studentname` with the trim result but you actually compare to `$_POST['studentname']`. Try to use `$studentname === ""`.
Upvotes: 1 <issue_comment>username_2: 
```
if (trim($_POST['studentname']) == "" || $_POST['password1'] == "" || $_POST['password2'] == "") {
    echo "Niste izpolnili vsa polja";
}
```
Upvotes: 0 <issue_comment>username_3: I would enhance the following to trim all input fields, not just the `studentname`
```
if (isset($_POST["button"])) {
$studentname = trim($_POST['studentname']);
$password1 = trim($_POST['password1']);
$password2 = trim($_POST['password2']);
}
```
Then make sure to write your conditions on the variables, not the original POSTed values. This is the main issue in your current code.
I also rewrote your else/if to be a bit more clear.
```
// ensure that user submitted all fields
if (
    $studentname == ''
|| $password1 == ''
|| $password2 == ''
) {
echo "Niste izpolnili vsa polja";
}
// ensure that passwords match
elseif($password1 != $password2) {
echo "Gesli se ne ujemata";
}
// validations passed
else {
echo "Registracija uspela";
}
```
Upvotes: 0
|
2018/03/17
| 410 | 1,461 |
<issue_start>username_0: I'd like to know how to implement a textarea where a user can tag people by typing @ and then write a person's name. You get inline suggestions for word matches.
I'm not even sure what the technical name would be for such a feature, but it's common on sites like Facebook.
|
2018/03/17
| 844 | 1,965 |
<issue_start>username_0: I want to counting data using range base from array using PHP..
Here is my array
```
$array = array(
1.1, 2.2, 3.2, 4.5, 4.2,
5.4, 6.2, 7.1, 8, 9.4, 10,
1.7);
```
My expected output is like this (I set the ranges manually; I just want to count how many values fall within each range):
```
1.00 - 2.50 : 3 data
2.51 - 3.50 : 0 data
3.51 - 4.00 : 1 data
```
I tried to write it like this but it still doesn't work
```
$start = 1;
$end = 2.5;
$int = 0;
foreach ($array as $key => $value) {
if ($start>=$value && $end<=$value) {
$int++;
}
}
echo $int;
```
Please help me to resolve this simple case . Thankyou very much<issue_comment>username_1: If I don't misunderstood your question then this should work for you.
```
$array = array(
1.1, 2.2, 3.2, 4.5, 4.2,
5.4, 6.2, 7.1, 8, 9.4, 10,
1.7);
$count = [];
foreach ($array as $key => $value) {
    if ($value > floor($value)) {
        $range = floor($value) . '-' . ceil($value);
        $count[$range] = isset($count[$range]) ? $count[$range] + 1 : 1;
    }
}
print '<pre>';
print_r($count);
print '</pre>';
```
**Output:**
```
Array
(
[1-2] => 2
[2-3] => 1
[3-4] => 1
[4-5] => 2
[5-6] => 1
[6-7] => 1
[7-8] => 1
[9-10]=> 1
)
```
**See demo:** <https://eval.in/973669>
Upvotes: 0 <issue_comment>username_2: You have to set the `range` because it is consistent values. You can use `array_reduce` and `array_filter` to summarise the data.
```
$array = array(1.1, 2.2, 3.2, 4.5, 4.2,5.4, 6.2, 7.1, 8, 9.4, 10,1.7);
$range = array("1.00-2.50","2.51-3.50","3.51-4.00");
$grouped = array_reduce($range, function($c,$v) use($array) {
$group = array_filter($array,function($x) use ($v){
$t = explode("-",$v);
return $x >= $t[0] && $x <= $t[1];
});
$c[ $v ] = count( $group );
return $c;
},array());
echo "<pre>";
print_r( $grouped );
echo "</pre>";
```
This will result to:
```
Array
(
[1.00-2.50] => 3
[2.51-3.50] => 1
[3.51-4.00] => 0
)
```
Upvotes: 2 [selected_answer]
|
2018/03/17
| 822 | 2,080 |
<issue_start>username_0: This google maps api 'for loop' returns the duration of a journey as a string (the last line return "34 min" in this example).
```
for (var i = 0; i < originList.length; i++) {
var results = response.rows[i].elements;
geocoder.geocode({
'address': originList[i]
},
showGeocodedAddressOnMap(false));
for (var j = 0; j < results.length; j++) {
geocoder.geocode({
'address': destinationList[j]
},
showGeocodedAddressOnMap(true));
outputDiv.innerHTML += originList[i] + ' to ' + destinationList[j] + ': ' + results[j].distance.text + ' in ' + results[j].duration.text + '<br>';
}
}
}
```
Is there a chance I can get only numerical values from `results[j].duration.text;` and use them as integers? The final goal is to use the duration in a simple `S = V/t` formula.
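Two notes, offered as assumptions about the Distance Matrix response format: each element usually also carries `results[j].duration.value`, the duration in seconds, which needs no parsing at all; and if you do have to extract the number from text like `"34 mins"`, a small hypothetical helper can do it:

```javascript
// Hypothetical helper: pull the first run of digits out of a duration
// string such as "34 mins" and return it as an integer.
// Prefer results[j].duration.value (seconds) when it is available.
function durationTextToInt(text) {
  const match = text.match(/\d+/);
  return match ? parseInt(match[0], 10) : null;
}

console.log(durationTextToInt("34 mins")); // 34
```

Note that for text like `"1 hour 12 mins"` this only returns the leading number, which is another reason to prefer the seconds value.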
|
2018/03/17
| 1,808 | 6,609 |
<issue_start>username_0: 1.**Introduction:**
>
> So I want to develop a special filter method for uiimages - my idea is to change from one picture all the colors to black except a certain color, which should keep their appearance.
>
>
>
Images are always nice, so look at this image to get what I'd like to achieve:

2.**Explanation:**
I'd like to apply a filter (algorithm) that is able to find specific colors in an image. The algorithm must be able to replace all colors that are not matching to the reference colors with e.g "black".
I've developed a simple code that is able to replace specific colors (color ranges with threshold) in any image.
**But tbh this solution doesn't seems to be a fast & efficient way at all!**
---
```
func colorFilter(image: UIImage, findcolor: String, threshold: Int) -> UIImage {
let img: CGImage = image.cgImage!
let context = CGContext(data: nil, width: img.width, height: img.height, bitsPerComponent: 8, bytesPerRow: 4 * img.width, space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
context.draw(img, in: CGRect(x: 0, y: 0, width: img.width, height: img.height))
let binaryData = context.data!.assumingMemoryBound(to: UInt8.self),
referenceColor = HEXtoHSL(findcolor) // [h, s, l] integer array
for i in 0.. threshold) {
let setValue: UInt8 = 255
binaryData[pixel] = setValue; binaryData[pixel+1] = setValue; binaryData[pixel+2] = setValue; binaryData[pixel+3] = 255
}
}
}
let outputImg = context.makeImage()!
return UIImage(cgImage: outputImg, scale: image.scale, orientation: image.imageOrientation)
}
```
3.**Code Information** The code above is working quite fine but is absolutely ineffective. Because of all the calculation (especially color conversion, etc.) this code is taking a LONG (too long) time, so have a look at this screenshot:

4. **My question** I'm pretty sure there is a WAY simpler solution of filtering a specific color (with a given threshold `#c6456f is similar to #C6476f, ...`) instead of looping trough EVERY single pixel to compare it's color.
* So what I was thinking about was something like a filter (CIFilter-method) as alternative way to the code on top.
5. **Some Notes**
* So I do not ask you to post any replies that contain suggestions to use the **openCV libary**. I would like to develop this "algorithm" exclusively with Swift.
* The size of the image from which the screenshot was taken over time had a resolution of 500 \* 800px
6. **Thats all**
Did you really read this far? - congratulation, however - **any help how to speed up my code would be very appreciated**! *(Maybe theres a better way to get the pixel color instead of looping trough every pixel)* Thanks a million in advance `:)`<issue_comment>username_1: First thing to do - profile (measure time consumption of different parts of your function). It often shows that time is spent in some unexpected place, and always suggests where to direct your optimization effort. It doesn't mean that you have to focus on that most time consuming thing though, but it will show you where the time is spent. Unfortunately I'm not familiar with Swift so cannot recommend any specific tool.
Regarding iterating through all pixels - depends on the image structure and your assumptions about input data. I see two cases when you can avoid this:
1. When there is some optimized data structure built over your image (e.g. some statistics in its areas). That usually makes sense when you process the same image with same (or similar) algorithm with different parameters. If you process every image only once, likely it will not help you.
2. When you know that the green pixels always exist in a group, so there cannot be an isolated single pixel. In that case you can skip one or more pixels and when you find a green pixel, analyze its neighbourhood.
Upvotes: 2 <issue_comment>username_2: I do not code on your platform but...
Well I assume your masked areas (with the specific color) are continuous and large enough ... that means you got groups of pixels together with big enough areas (not just few pixels thick stuff). With this assumption you can create a density map for your color. What I mean if min detail size of your specific color stuff is 10 pixels then you can inspect every 8th pixel in each axis speeding up the initial scan ~64 times. And then use the full scan only for regions containing your color. Here is what you have to do:
1. **determine properties**
You need to set the step for each axis (how many pixels you can skip without missing your colored zone). Let call this dx,dy.
2. **create density map**
simply create 2D array that will hold info if center pixel of region is set with your specific color. so if your image has `xs,ys` resolution than your map will be:
```
int mx=xs/dx;
int my=ys/dy;
int map[mx][my],x,y,xx,yy;
for (yy=0,y=dy>>1;y>1;x
```
3. **enlarge map set areas**
now you should enlarge the set areas in `map[][]` to neighboring cells because **#2** could miss edge of your color region.
4. **process all set regions**
```
for (yy=0;yy=threshold) pixel(x,y)=0x00000000;
```
If you want to speed up this even more than you need to detect set `map[][]` cells that are on edge (have at least one zero neighbor) you can distinquish the cells like:
```
0 - no specific color is present
1 - inside of color area
2 - edge of color area
```
That can be done by simply in `O(mx*my)`. After that you need to check for color only the edge regions so:
```
for (yy=0;yy=threshold) pixel(x,y)=0x00000000;
} else if (map[xx][yy]==0)
{
for (y=yy\*dy,y<(yy+1)\*dy;y++)
for (x=xx\*dx,x<(xx+1)\*dx;x++)
pixel(x,y)=0x00000000;
}
```
This should be even faster. In case your image resolution `xs,ys` is not a multiple of region size `mx,my` you should handle the outer edge of image either by zero padding or by special loops for that missing part of image...
btw how long it takes to read and set your whole image?
```
for (y=0;y
```
if this alone is slow than it means your pixel access is too slow and you should use different api for this. That is very common mistake on Windows **GDI** platform as people usually use `Pixels[][]` which is slower than crawling snail. there are other ways like bitlocking/blitting,ScanLine etc so in such case you need to look for something fast on your platform. If you are not able to speed even this stuff than you can not do anything else ... btw what HW is this run on?
Upvotes: 1 [selected_answer]
|
2018/03/17
| 793 | 3,319 |
<issue_start>username_0: I read many articles about CloudFront and Edge Locations.
I can't understand how the user's request is automatically routed to their nearest edge location.
Can someone explain exactly what happens when a client requests a static resource? What network components (ISP, AWS, etc.) are involved?
Regards.<issue_comment>username_1: When a browser makes a request for www.example.com, it first has to do a DNS lookup to establish the IP address of the server to send the HTTP request. Its at this point the DNS server decides which edge location IP address to route your request. The returned IP address is selected based on your IP address, so your always routed to a geographically local edge location.
When an edge location receives your HTTP request, it will either return a cached response (if available), otherwise it will make a request back to the origin, then return this response to the browser and optionally keep a copy (based on the headers sent from the origin/cloudfront config)
FYI technically this is not a redirection - that is a feature of HTTP, and can only happen after you have gotten the IP address of the server.
Upvotes: -1 <issue_comment>username_2: from AWS docs (<https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CloudFrontRegionaledgecaches.html?shortFooter=true>)
>
> When a viewer makes a request by your website or application, DNS
> routes the request to the CloudFront edge location that can best serve
> the user’s request. This location is typically the nearest CloudFront
> edge location in terms of latency
>
>
>
"latency" means the time it takes to get a request to and from the CloudFront edge location. It's not the same as "bandwidth", it's more like a reaction time
For details of how AWS monitors latency, see the Route53 documentation
If the edge location doesn't have a particular item that is requested then it will query a near neighbour...
>
> If the files are in the cache, CloudFront returns them to the user. If
> the files are not in the cache, the edge servers go to the nearest
> Regional Edge Cache to fetch the object. In the Regional Edge Cache
> location, CloudFront again checks its cache for the requested files.
>
>
>
If that doesn't work then the file is requested from the "origin server" which is the original web server or S3 bucket that contains the master copy of the site.
UPDATE: Just to add to what @michael-sqlbot says below. CloudFront access for the end user happens via a "distribution domain name". This is a domain name in the form "abc123.cloudfront.net". This name must be in Route53. It is possible to use a CNAME from your own DNS-registered domains to indirect to this name
When the "distribution domain name" is queried in @michael-sqlbot's words
>
> the DNS query results in the browser receiving an answer including IP
> addresses that are selected in order to route the request to the
> CloudFront edge location that can best serve the user’s request.
>
>
>
To put it slightly differently, the same "distribution domain name" is used worldwide. But in different locations, this maps to different IP addresses. The IP addresses are "typically" the nearest by a latency measure but presumably, this is not absolutely guaranteed to be the case
Upvotes: 2
|
2018/03/17
| 802 | 2,647 |
<issue_start>username_0: I just installed Python3.7 and Pycharm on my Windows 10 PC.
I am running pip version 9.0.2
In Pycharm, it says I have version 28.8.0 of setuptools, when I try to upgrade this in Pycharm, which I believe runs the line
```
pip install -U setuptools
```
I get the error:
>
> PermissionError: [WinError 32] The process cannot access the file because it
> is being used by another process:
> 'c:\users\Username\pycharmprojects\untitled1\venv\lib\site-
> packages\setuptools-28.8.0-py3.6.egg' ->
> 'C:\Users\Username\AppData\Local\Temp\pip-i5jxitem-
> uninstall\users\Username\pycharmprojects\untitled1\venv\lib\site-
> packages\setuptools-28.8.0-py3.6.egg'
>
>
>
I have tried running
```
pip install --upgrade setuptools
```
which runs successfully and says I have the latest version.
Does anyone know how I can successfully update setuptools?
The reason I'd like setuptools to be up to date, is so I can then get rid of the egg\_info error installing other packages.<issue_comment>username_1: I have the same error. Not sure why it happened. But I managed to upgrade by running:
`pip install setuptools --upgrade --ignore-installed`
Upvotes: 6 <issue_comment>username_2: You can also try:
```
easy_install --upgrade setuptools
```
Even though easy\_install's deprecated, there's a good chance it'll still work on Windows.
If you try to use pip to upgrade setuptools:
```
pip install -U setuptools
```
it seems to get stuck on an error about 10.0 vs 18.0 "You should consider upgrading via the 'python -m pip install --upgrade pip' command."
The official pip doco(<https://pip.pypa.io/en/stable/installing/#upgrading-pip>) says on Windows to use this:
```
python -m pip install --upgrade pip
```
However, it seemed pip and setuptools had a hiccup with a circular dependency or lock around pip-v9.0 or 10.0 to pip-18.0 and setuptools v28.0 to v39.0 or 40.0 so persevere with this:
```
python -m pip install --force-reinstall pip
```
You need setuptools >= 39.0 if you want to install pipenv also.
Upvotes: 2 <issue_comment>username_3: Try the following command to upgrade the setup tools in windows
```
pip install -U pip setuptools
```
or
```
pip install setuptools --upgrade --ignore-installed
```
Upvotes: 0 <issue_comment>username_4: I solved more or less similar problem as follow:
In my `PyCharm` project terminal tab:
```
`(venv) PS C:/path/to/project>`
```
updated `pip`, `setuptools` and `wheel` with the command:
```
py -m pip install --upgrade pip setuptools wheel
```
([source of recommendation](https://packaging.python.org/en/latest/tutorials/installing-packages/))
Upvotes: 0
|
2018/03/17
| 813 | 2,704 |
<issue_start>username_0: I have a table that contains articles. Some articles are part of a parent article.
This is my model:
```
article_id | parent_article_id | title
--------------------------------------
61 | 0 | ...
62 | 61 | ...
43 | 61 | ...
48 | 61 | ...
```
**EXAMPLE 1:**
If I select a parent article (let's say `article_id` 61) I want to return `article_id` 62, 43 and 48. In other words, find all of the child articles.
I was able to create a successful query:
```
SELECT a.article_id, a.title
FROM article a
WHERE a.parent_article_id = 61
```
**EXAMPLE 2:**
This is where I get stuck:
I kind of want to do the reverse. For example, if I select a child article (let's say `article_id` 43), I want to return `article_id` 61, 62, 48. In other words, if I select a child article, then return the parent article (61) and all of the children (62, 48). I don't need to return the child `article_id` 43.
How do I create one query for both examples?
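One way to cover both cases with a single query (a sketch, demonstrated here with Python's built-in `sqlite3` so it is runnable; the SQL itself is plain enough for most engines): first resolve the selected article to its root, then return every article in that family except the selected one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE article (article_id INTEGER, parent_article_id INTEGER, title TEXT);
    INSERT INTO article VALUES (61, 0, 'parent'), (62, 61, 'a'),
                               (43, 61, 'b'), (48, 61, 'c');
""")

QUERY = """
    SELECT a.article_id
    FROM article a
    JOIN article sel  ON sel.article_id = ?
    JOIN article root ON root.article_id =
         CASE WHEN sel.parent_article_id = 0 THEN sel.article_id
              ELSE sel.parent_article_id END
    WHERE (a.article_id = root.article_id OR a.parent_article_id = root.article_id)
      AND a.article_id <> sel.article_id
"""

print(sorted(r[0] for r in conn.execute(QUERY, (61,))))  # [43, 48, 62]
print(sorted(r[0] for r in conn.execute(QUERY, (43,))))  # [48, 61, 62]
```

The `CASE` expression is the key design choice: it maps a parent article to itself and a child article to its parent, so the same `WHERE` clause works for both examples.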
`pip install setuptools --upgrade --ignore-installed`
Upvotes: 6 <issue_comment>username_2: You can also try:
```
easy_install --upgrade setuptools
```
Even though easy\_install's deprecated, there's a good chance it'll still work on Windows.
If you try to use pip to upgrade setuptools:
```
pip install -U setuptools
```
it seems to get stuck on an error about 10.0 vs 18.0 "You should consider upgrading via the 'python -m pip install --upgrade pip' command."
The official pip doco(<https://pip.pypa.io/en/stable/installing/#upgrading-pip>) says on Windows to use this:
```
python -m pip install --upgrade pip
```
However, it seemed pip and setuptools had a hiccup with a circular dependency or lock around pip-v9.0 or 10.0 to pip-18.0 and setuptools v28.0 to v39.0 or 40.0 so persevere with this:
```
python -m pip install --force-reinstall pip
```
You need setuptools >= 39.0 if you want to install pipenv also.
Upvotes: 2
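For the record, here is one possible shape for a single query that covers both examples from the question. It is only a sketch: it resolves the "root" of the selected article's family (the article itself if its `parent_article_id` is 0, otherwise its parent) and returns every other family member. The table and ids are the ones from the question; SQLite via Python's `sqlite3` is used here only to make the sketch runnable, and the same SQL should work in most engines:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE article (article_id INT, parent_article_id INT, title TEXT);
    INSERT INTO article VALUES (61, 0, '...'), (62, 61, '...'),
                               (43, 61, '...'), (48, 61, '...');
""")

QUERY = """
    SELECT b.article_id
    FROM article a
    JOIN article b
      ON b.parent_article_id = CASE WHEN a.parent_article_id = 0
                                    THEN a.article_id        -- a is the parent
                                    ELSE a.parent_article_id -- a is a child
                               END
      OR b.article_id = a.parent_article_id  -- also include the parent row
    WHERE a.article_id = ?                   -- the selected article...
      AND b.article_id <> a.article_id       -- ...is itself excluded
"""

print(sorted(r[0] for r in conn.execute(QUERY, (61,))))  # [43, 48, 62]
print(sorted(r[0] for r in conn.execute(QUERY, (43,))))  # [48, 61, 62]
```

Note that `b.article_id = a.parent_article_id` would also match an article whose id happens to be 0, so the sketch assumes 0 is never a real `article_id`.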
|
2018/03/17
| 1,656 | 4,253 |
<issue_start>username_0: I am getting a wrong value in the @timestamp field for Elasticsearch/Filebeat.
My filebeat pipeline definition
```
curl -H 'Content-Type: application/json' -XPUT "logger:9200/_ingest/pipeline/app_log" -d'
{
"description" : "Ingest pipeline for Jetty server log",
"processors" : [
{
"grok": {
"field": "message",
"patterns": ["%{TIMESTAMP_ISO8601:timestamp} (%{UUID:accessid})? \\[(?<threadname>[^\\]]+)\\] %{LOGLEVEL:level} %{DATA:classname} - %{GREEDYDATA:message}"]
}
},
{
"date": {
"field": "timestamp",
"formats": [ "yyyy-mm-dd H:m:s,SSS" ]
}
}
],
"on\_failure" : [{
"set" : {
"field" : "error.message",
"value" : "{{ \_ingest.on\_failure\_message }}"
}
}]
}'
```
Simulation result for some sample logger lines (using a Logback configuration):
```
curl -H 'Content-Type: application/json' -XPOST "logger:9200/_ingest/pipeline/app_log/_simulate?pretty" -d'
{
"docs": [
{
"_source": {
"message": "2018-03-17 22:38:39,079 bab3157d-a11c-4dba-a6d6-c47ae0de2b7f [qtp224100622-174782] INFO i.n.core.services.cache.CacheBuilder - Key : ChIJrTTTJkdsrjsRXkrYRKRRfd8-seo-localitiesv1 is returned from cache"
}
},
{
"_source": {
"message": "2017-12-12 01:14:12,079 [qtp224100622-185269] WARN i.n.m.cache.sdk.RedisCacheProvider - No matching policy: class in.nobroker.core.domain.Token"
}
}
]
}'
```
The result from this simulation:
```json
{
  "docs" : [
    {
      "doc" : {
        "_index" : "_index",
        "_type" : "_type",
        "_id" : "_id",
        "_source" : {
          "accessid" : "bab3157d-a11c-4dba-a6d6-c47ae0de2b7f",
          "@timestamp" : "2018-01-17T22:38:39.079Z",
          "classname" : " i.n.core.services.cache.CacheBuilder",
          "level" : "INFO",
          "message" : "Key : <KEY> is returned from cache",
          "timestamp" : "2018-03-17 22:38:39,079",
          "threadname" : "qtp224100622-174782"
        },
        "_ingest" : {
          "timestamp" : "2018-03-17T15:35:35.543Z"
        }
      }
    },
    {
      "doc" : {
        "_index" : "_index",
        "_type" : "_type",
        "_id" : "_id",
        "_source" : {
          "@timestamp" : "2017-01-12T01:14:12.079Z",
          "classname" : " i.n.m.cache.sdk.RedisCacheProvider",
          "level" : "WARN",
          "message" : "No matching policy: class in.nobroker.core.domain.Token",
          "timestamp" : "2017-12-12 01:14:12,079",
          "threadname" : "qtp224100622-185269"
        },
        "_ingest" : {
          "timestamp" : "2018-03-17T15:35:35.543Z"
        }
      }
    }
  ]
}
```
Please notice that @timestamp field is totally different from timestamp field.
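A note on the output above: in the Joda-style patterns the `date` processor uses, lowercase `mm` means minute-of-hour while month-of-year is uppercase `MM`, so with `"yyyy-mm-dd H:m:s,SSS"` the month is never actually read and defaults to January - which matches the `01` month in both `@timestamp` values. The `date` processor would presumably need something like:

```json
{
  "date": {
    "field": "timestamp",
    "formats": [ "yyyy-MM-dd HH:mm:ss,SSS" ]
  }
}
```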
|
2018/03/17
| 945 | 3,434 |
<issue_start>username_0: I have a form. My objective is to send and insert the values of the form into my database. Then
1. Clear the input of the form
2. Show the successful message.
Like this:
[enter image description here](https://i.stack.imgur.com/jGOUE.png)
-------------------------------------------------------------------
[](https://i.stack.imgur.com/zgIdP.png)
My problems: When I press the "save" button, I am redirected to another page.
This is the operating error:
[](https://i.stack.imgur.com/F6k00.png)
and change the page
[](https://i.stack.imgur.com/3940c.png)
¿What is the problem on my code?
html
```
Insert Data Into MySQL: jQuery + AJAX + PHP
Name:
Age :
Save
```
This is my script
```
$("#myForm").submit( function(e) {
// Prevent the normal form submission event
e.preventDefault();
// Create an object of the form
var form = $(this);
// Make an AJAX request
$.post(this.action, form.serializeArray(), function(info) {
// Clear the form
form[0].reset();
// Display message
$("#result").html(info);
});
});
```
Finally, my code to insert
```
php
$conn = mysql_connect('localhost', 'root', '');
$db = mysql_select_db('practicas');
$name = $_POST['name'];
$age = $_POST['age'];
if(mysql_query("INSERT INTO ajaxtabla VALUES('$name', '$age')"))
echo "Successfully Inserted";
else
echo "Insertion Failed";
?
```
Thanks for help me<issue_comment>username_1: You can simplify the code by having just one event handler for when you submit the form. Additionally, you need to include `e.preventDefault()` which prevents the normal function of the submit button.
```
$("#myForm").submit( function(e) {
// Prevent the normal form submission event
e.preventDefault();
// Create an object of the form
var form = $(this);
// Make an AJAX request
$.post(this.action, form.serializeArray(), function(info) {
// Clear the form
form[0].reset();
// Display message
$("#result").html(info);
});
});
```
Give your button a `type` attribute as well to ensure your form submits upon clicking.
```
<button type="submit">Save</button>
```
**Demo**
```js
$(function() {
$("#myForm").submit(function(e) {
// Prevent the normal form submission event
e.preventDefault();
console.log('onsubmit event handler triggered');
// Create an object of the form
var form = $(this);
// Make an AJAX request
/*$.post(this.action, form.serializeArray(), function(info) {
// Clear the form
form[0].reset();
// Display message
$("#result").html(info);
});*/
// fake AJAX request
setTimeout(function(){
// Clear the form
form[0].reset();
// Display message
$("#result").html('Successfully Inserted');
}, 3000);
});
});
```
```html
Name:
Age :
Save
```
Upvotes: 0 <issue_comment>username_2: In AJAX you are posting the form with its action, so it will be redirected.
If you don't want the redirect, then send the AJAX request like this:
```
$.ajax({
    method: "POST",
    url: "the file you want to send the request to"
}).done(function(response){
    // handle the response here
});
```
Upvotes: -1
|
2018/03/17
| 744 | 2,972 |
<issue_start>username_0: We are building a cross platform app, and a decision maker on my team (who is an Apple user) wants all of our "Select to delete" checkboxes to be round, "Like Apple's Mail and Messages apps".
[](https://i.stack.imgur.com/KCjtr.jpg)
[](https://i.stack.imgur.com/oaI0n.png)
I personally find this confusing; I think they look like radio buttons. He disagrees that anyone would confuse round circles for radio buttons.
Apple's HIG about checkboxes doesn't specifically state that checkboxes should be a specific shape [Apple's HIG on Checkboxes](https://developer.apple.com/macos/human-interface-guidelines/buttons/checkboxes/)
Apple's Sample Code shows square checkboxes, but that's not selecting to DELETE an item [Apple's TableViewCell Sample Code](https://developer.apple.com/library/content/samplecode/Accessory/Introduction/Intro.html)
**Have all iOS devices always used round checkmarks to delete items, or is this a new change?**
Where and when does iOS use round checkmarks?<issue_comment>username_1: Round buttons are the default and standard in multiple selection of iOS table views, and always have been, as far as I know. You can see this in the second screenshot of the ["Buttons" page in Apple's iOS Human Interface Guidelines](https://developer.apple.com/ios/human-interface-guidelines/controls/buttons/). Try this Swift sample code that invokes `UITableView`'s default multiple selection mode:
```
import UIKit
class ViewController: UITableViewController {
override func viewDidLoad() {
super.viewDidLoad()
tableView.allowsMultipleSelectionDuringEditing = true
tableView.setEditing(true, animated: false)
}
override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell{
let cell = tableView.dequeueReusableCell(withIdentifier: "Cell")
cell?.textLabel?.text = String(indexPath.row)
return cell!
}
override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return 15
}
}
```
The above code produces this interface (taken on the iPhone X iOS 11.3 Simulator):
[](https://i.stack.imgur.com/1PV43.png)
Upvotes: 3 [selected_answer]<issue_comment>username_2: A common case where iOS uses round check marks in its design is when designing for color-impaired users. Since these users can't rely on the colors green and red to provide them information, these check marks are used instead: instead of green a check mark is used, and instead of red an X is used.
If you are just selecting items to delete I would consider using numerical values in the round icon to identify how many messages the user is selecting.
Upvotes: 0
|
2018/03/17
| 525 | 1,525 |
<issue_start>username_0: I have this error when running repair:
```
myisamchk: error: 140 when opening MyISAM-table '/var/lib/mysql/zibarsho_karno/wp_yoast_seo_links.MYI'
```
How can I fix this?<issue_comment>username_1: This is a bug that has already been reported since MySQL 5.6.
It is still happening in 8.0.11, so in the meantime you can use the workaround:
do not pass the `.MYI` extension.
```
myisamchk --force --update-state /var/lib/mysql/zibarsho_karno/wp_yoast_seo_links
```
Upvotes: 3 <issue_comment>username_2: As user [username_1](https://stackoverflow.com/users/5446162/jesus-uzcanga) already mentioned, it is an old [bug](https://bugs.mysql.com/bug.php?id=87729) which hasn't been fixed yet [current version is 8.0.15].
These commands are a workaround when you run it directly in the directory where the `.MYI` files are located:
```
ls *.MYI | xargs basename -s .MYI | xargs myisamchk
```
It removes the extension and runs `myisamchk` for each MyISAM data file.
Upvotes: 3 <issue_comment>username_3: ```
ls *.MYI | xargs basename -s .MYI | xargs -I{} myisamchk -r --force {}
```
Upvotes: 0 <issue_comment>username_4: ```
ls *.MYI | sed 's/\.[^.]*$//' | xargs myisamchk -F -U
```
This saved me here because of the `basename` "extra operand" error and other issues. Please note ***-F -U*** are the ***--fast*** and ***--update-state*** flags; you can also run it without them.
Upvotes: 4 <issue_comment>username_5: Try using this:
```
find /var/lib/mysql/*/* -name '*.MYI' | sed -e 's/\.MYI$//' | xargs -I{} myisamchk -r -f -o {}
```
Upvotes: 2
|
2018/03/17
| 905 | 3,022 |
<issue_start>username_0: This example should fire an alert when I add text in the text area; for some reason it doesn't when I use `$('#sentid1').onkeypress`, and it only works when I use `document.onkeypress`.
html part:
```
This is sentence 1. This is sentence 2.
```
Javascript/jQuery part:
```
if (1==1) {
$('#sentid1').onkeypress = function(event) {
alert("theid: " + this.id);
};
} else{
document.onkeypress = function(event) {
alert("theid: " + this.id);
};
}
```<issue_comment>username_1: Try this.
For the jQuery selector `$('#sentid1')`, use jQuery's `keypress` method:
```js
if (1==1) {
$('#sentid1').keypress(function() {
alert("theid: " + this.id);
});
} else {
$(document).keypress(function() {
alert("theid: " + this.id);
});
}
```
Upvotes: 0 <issue_comment>username_2: ```
if (1==1) {
$('#sentid1').keypress(function(event) {
alert("theid: " + this.id);
});
} else{
$(document).keypress(function(event) {
alert("theid: " + this.id);
});
}
```
Upvotes: 0 <issue_comment>username_3: Your goal is not entirely clear, but if you want an alert while writing something in the textarea you can use `keyup`:
```
if (1==1) {
$('#sentid1').keyup(function(event) {
alert("theid: " + this.id);
});
} else{
$(document).keyup(function(event) {
alert("theid: " + this.id);
});
}
```
Upvotes: 0 <issue_comment>username_4: You should attach the event handler like this
```
$(...).keypress(function(event) {...} )
```
or
```
$(...).on('keypress', function(event) {...} )
```
And it needs to be attached to the `#textarea`, since that is the element that can take an input, which the `span` cannot.
```js
$('#textarea').keypress(function(event) {
alert("theid: " + this.id);
});
```
```html
This is sentence 1. This is sentence 2.
```
---
Updated based on a comment
If to get the `span`, do like this, where you use the `event.target.id` to get which `span`
```js
$('#textarea').keypress(function(event) {
alert("theid: " + event.target.id);
});
```
```css
#textarea {
position: relative;
}
span[contenteditable] {
outline: none;
}
span[contenteditable]:focus::before {
content: '';
position: absolute;
left: 0; top: 0; right: 0; bottom: 0;
outline: 2px solid lightblue;
}
```
```html
This is sentence 1. This is sentence 2.
```
Upvotes: 1 <issue_comment>username_5: How are you testing that keypress event? In order to trigger keypress event you need your element to be either **focused** or be a parent of some other element on which keypress is triggered, so keypress is **bubbled**.
You're trying to catch keypress on **span**, by default span is not focusable, let's make it with **tabindex** for example
```
<span id="sentid1" tabindex="0">This is sentence 1.</span>
```
Now you can actually **click** on this span, press some key and see the expected result
Upvotes: 3 [selected_answer]
|
2018/03/17
| 509 | 1,529 |
<issue_start>username_0: In Bootstrap 4, I have three divs:
```
A
B
C
```
Is there a way, **without placing div B and C inside another div**, to achieve this:
```
A B
C
```<issue_comment>username_1: What you can do is assign a class to A and put a float-left in the CSS; then it's just a matter of margins etc.
```
A
B
C
```
Css code
```
.left{
float:left;
}
```
Upvotes: 0 <issue_comment>username_2: >
> Is there a way, **without placing div B and C inside another div**, to achieve this:
>
>
>
```
A B
C
```
Yes, you can achieve this by using offset classes like so:
```html
A
B
C
```
Note: The column classes and the offset class are not responsive in the example shown above. To make them responsive, you need to add the appropriate infixes.
Reference:
<https://getbootstrap.com/docs/4.0/layout/grid/#offsetting-columns>
<https://getbootstrap.com/docs/4.0/layout/grid/#responsive-classes>
>
> Now say div A is 600px high, B and C are both 50px high. I don't want A to push C down. I want C to go right below B. What now?
>
>
>
In this case, you'd want to use the `d-block clearfix` and `float-left` + `float-right` classes like so:
```html
A
B
C
```
In this case, `d-block` switches from the default flexbox to "display:block" for the row and the `float-left`/`float-right` classes tell the columns where to float.
Reference:
<http://getbootstrap.com/docs/4.0/utilities/float/>
<https://getbootstrap.com/docs/4.0/utilities/clearfix/>
Upvotes: 2 [selected_answer]
|
2018/03/17
| 681 | 2,520 |
<issue_start>username_0: I am trying to write a recursive function that takes two strings and returns true if string1 is a substring of string2 and false otherwise, by checking recursively whether str2 starts with str1.
I tried this code but it does not work and I do not know why!
```
def main():
s1 = input("enter string")
s2 = input("enter steing")
sub = subs(s1,s2)
print(sub)
def subs(s1, s2):
if s2 == 0:
return True
else:
if s2.startswith((s1)):
subs(s1, s2[1:])
return True
main()
```
thank you !<issue_comment>username_1: Your program has a few syntactic issues first:
1. You're invoking the `subs` function before you're actually defining it.
2. Your indent of main() is off.
3. `subs` is an inner function of `main`. While not technically incorrect, using inner functions can make your code more difficult to read.
You also have some semantic issues:
1. `subs` is only returning true on the escape condition, there is no default return value. This is poor practice
2. `subs` is checking if `s2 == 0`, which it will never be - strings won't equal 0. Empty strings are Falsy, so you can check if the string is empty by using `if not s2:` instead
3. Your logic dictates that the recursion will ONLY occur if `s2` starts with `s1`, meaning if s1='ab' and s2='habc', it will not trigger recursion, even though s1 is a substring of s2.
My suggestion would be to reexamine your algorithm. Begin with identifying how you will begin looking for a candidate substring first, then what your escape conditions are, then how you break the problem into a smaller problem.
Upvotes: 0 <issue_comment>username_2: I rewrote the code you had written, but kept the intuition the same. The major problems were:
1. Indentation mistakes in many places.
2. The function is defined after it is called.
3. The wrong method is used to detect an empty string.
```
def main():
def subs(s1, s2,k):
#If input string is empty .
if k==0 and not s2.strip():
print("Yes")
return 0
#String 1 not empty and search for the sub-string.
elif s1.strip() and s1.startswith(s2):
print("Yes")
return 0
#String 1 not empty and sub-string not matched.
elif s1.strip():
subs(s1[1:],s2,1)
#String empty and sub-string not match.
else :
print("No")
s1 = input("enter string")
s2 = input("enter sub-string")
sub = subs(s1,s2,0)
#print(sub)
main()
```
Upvotes: -1
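For comparison, the recursion the question aims for (true if `s2` starts with `s1`, otherwise recurse on `s2` with its first character dropped) can be written as below; `is_substring` is just an illustrative name:

```python
def is_substring(s1, s2):
    """Return True if s1 occurs inside s2 (recursive version)."""
    if len(s2) < len(s1):        # s1 no longer fits: base case, not found
        return False
    if s2.startswith(s1):        # match at the current position
        return True
    return is_substring(s1, s2[1:])  # drop one character and try again

print(is_substring("miss", "mississippi"))  # True
print(is_substring("spam", "mississippi"))  # False
```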
|
2018/03/17
| 1,204 | 3,280 |
<issue_start>username_0: I have a list containing string elements:
```
movielst = ['A WALK,DRAGONBALL', 'JAMES BOND,MISSION']
```
and another list that contains integer values:
```
userlst = [[1,20],[6,7]]
```
I'm planning to print the output based off both lists, where the first element in movielst corresponds to the first list in userlst and so on.
Output to get:
```
Movies: A WALK,DRAGONBALL
Users: 1,20
Movies: JAMES BOND,MISSION
Users: 6,7
```
I wrote
```
for j in range(len(userlst)-1):
for i in movielst:
print("Movies: " + str(i))
print("Users: " + str(userlst[j]))
```
But I'm getting:
```
Movies: A WALK,DRAGONBALL
Users: 1,20
Movies: JAMES BOND,MISSION
Users: 1,20 #should be 6,7
```
How do i print the output based off both lists in parallel?<issue_comment>username_1: If you know both lists are the same length:
```
for i in range(len(movielst)):
    print("Movies: {}".format(str(movielst[i])))
    print("Users: {}".format(str(userlst[i])))
    print(' ')
```
This way you loop over both lists and print the same index of both lists in the same iteration.
The .format works as follows:
```
print('text {} text {}'.format(value1, value2))
```
outputs:
```
text value1 text value2
```
Upvotes: 0 <issue_comment>username_2: You can use `zip`
**Ex:**
```
movielst = ['A WALK,DRAGONBALL', 'JAMES BOND,MISSION']
userlst = [[1,20],[6,7]]
for i in zip(movielst, userlst):
print("Movies: {}".format(i[0]))
print("Users: {}".format(", ".join(map(str, i[1]))))
```
**Output:**
```
Movies: A WALK,DRAGONBALL
Users: 1, 20
Movies: JAMES BOND,MISSION
Users: 6, 7
```
**Note:**
1. I have used `map` to convert int to string for userlst
2. `join` to concat the element in userlst to your required format.
Upvotes: 1 <issue_comment>username_3: Use [zip](https://docs.python.org/2/library/functions.html#zip), [join](https://docs.python.org/2/library/stdtypes.html#str.join), and [format](https://docs.python.org/2/library/string.html#formatstrings) with a comprehension:
```
>>> print '\n'.join("Movies: {}\nUsers: {}".format(x,y) for x,y in zip(movielst,userlst))
Movies: A WALK,DRAGONBALL
Users: [1, 20]
Movies: JAMES BOND,MISSION
Users: [6, 7]
```
Or, as stated in comments and if you want double spaced:
```
>>> print '\n\n'.join("Movies: {}\nUsers: {}".format(*z) for z in zip(movielst,userlst))
Movies: A WALK,DRAGONBALL
Users: [1, 20]
Movies: JAMES BOND,MISSION
Users: [6, 7]
```
Upvotes: 1 <issue_comment>username_4: The j in the outer loop will not increase until 'i' has traversed every element, so j will print [1, 20] both times. To traverse both lists at the same time you can use one for loop.
```
for j in range(len(userlst)):
print("Movies: " + str(movielst[j]))
print("Users: " + str(userlst[j]))
```
The output will be :
```
Movies: A WALK,DRAGONBALL
Users: [1, 20]
Movies: JAMES BOND,MISSION
Users: [6, 7]
```
Upvotes: 0 <issue_comment>username_5: You can try this:
```
movielst = ['A WALK,DRAGONBALL', 'JAMES BOND,MISSION']
userlst = [[1,20],[6,7]]
for i in zip(movielst,userlst):
print("Movies : {}".format(i[0]))
print("Users : {} {}".format(*i[1]))
```
output:
```
Movies : A WALK,DRAGONBALL
Users : 1 20
Movies : JAMES BOND,MISSION
Users : 6 7
```
Upvotes: 0
|
2018/03/17
| 614 | 2,333 |
<issue_start>username_0: After recent deploy I found many errors like this on my production server :
>
> Last write attempt timed out; force-closing the connection.
>
>
>
At the moment I'm getting ~5 such errors per hour from different hosts.
I looked through the code in Netty `SslHandler` and see that this exception is expected when `close_notify` event is not fully sent to the receiver (according to the comments).
However, I never saw this error before. That makes me think that it may be some kind of wrong pipeline configuration issue. So at the moment, I'm not sure if that my error or just the network issue.
Netty version 4.1.22.Final
Epoll and ssl enabled with netty-tcnative-boringssl-static 2.0.7.Final
I made few unit tests but without any luck.
Does anybody have any clues?<issue_comment>username_1: There is not much you can do except raise the timeout. This means you could not write out the data fast enough to the remote peer, which most likely means the remote peer did not read fast enough. Another cause could be that you block the EventLoop, so it cannot make any progress in a timely manner.
Upvotes: 1 <issue_comment>username_2: After some investigation, I was able to fix the issue. Seems that problem was in the pipeline configuration.
Pipeline with bug:
```
.addLast(new IdleStateHandler(appIdleTimeout, 0, 0))
.addLast(sslCtx.newHandler(ch.alloc()))
.addLast(appChannelStateHandler);
```
Pipeline without the issue:
```
.addLast(new IdleStateHandler(appIdleTimeout, 0, 0))
.addLast(appChannelStateHandler)
.addLast(sslCtx.newHandler(ch.alloc()))
```
Where `appChannelStateHandler` is `ChannelInboundHandlerAdapter`:
```
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
if (evt instanceof IdleStateEvent) {
log.trace("State handler. App timeout disconnect. Event : {}. Closing.", ((IdleStateEvent) evt).state());
ctx.close();
} else {
ctx.fireUserEventTriggered(evt);
}
}
```
Probably the issue was that `ctx.close();` was called both in `SslHandler` and in `appChannelStateHandler`. At least this is something that comes to mind first. I am still not sure whether it is a Netty issue or a wrong pipeline configuration. I wasn't able to reproduce the problem locally anyway.
Upvotes: 1 [selected_answer]
|
2018/03/17
| 586 | 2,217 |
<issue_start>username_0: Given the example that you are connecting to a mongoDB in js.
```
mongoClient.connect("mongodb:/localhost/db",
function (err, db){
if (err){
throw err
}
} // some following code
```
What is the correct way to catch errors in an Express application so the program carries on running? Currently I have it set up as an if/else. That does do the trick (it keeps the program running), but it seems a bad way to do it.
```
mongoClient.connect("mongodb:/localhost/db",
function (err, db){
if (err){
resp.json(err)
console.log(err)
} else {
// some following code
}
}
```
Is there a better way to catch errors?
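One common answer to this: with Node-style error-first callbacks you handle `err` inside the callback (log it, send a response, `return`) instead of `throw err`, which would crash the process. A self-contained sketch; `connectToDb` here is a hypothetical stand-in for `mongoClient.connect`, so the snippet runs without a database:

```javascript
const results = [];

// Hypothetical stand-in for mongoClient.connect: always "fails".
function connectToDb(uri, callback) {
  callback(new Error("connection refused"), null);
}

connectToDb("mongodb://localhost/db", function (err, db) {
  if (err) {
    // Handle the error and return, instead of `throw err`,
    // which would take the whole process down.
    results.push("handled: " + err.message);
    return;
  }
  results.push("connected");
});

results.push("still running");
console.log(results);  // [ 'handled: connection refused', 'still running' ]
```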
|
2018/03/17
| 347 | 1,342 |
<issue_start>username_0: I have a object from the following class,
```
public class Customer {
private String email;
private String name;
}
```
How can I check whether every attribute is not null using Optional from java 8? Or there is another less verbose way?<issue_comment>username_1: With out reflection you can't check all in one shot (*as others mentioned, Optional not meant for that purpose*). But if you are okay to pass all attributes
```
boolean match = Stream.of(email, name).allMatch(Objects::nonNull);
```
You can have it as an util method inside your Class and use it.
Upvotes: 3 [selected_answer]<issue_comment>username_2: >
> How can I check whether every attribute is not null using Optional
> from java 8?
>
>
>
[Optional](https://docs.oracle.com/javase/9/docs/api/java/util/Optional.html) is NOT meant for checking nulls. Rather it is intended to act as a container on your objects / fields and provide a null safe reference. The idea is to relieve programmer from checking null references for every field in a sequence of operations covering multiple fields. What you are asking for is exactly opposite of what Optional is supposed to help with.
[Here](http://www.oracle.com/technetwork/articles/java/java8-optional-2175753.html) is a good article explaining the purpose of `Optional`.
Upvotes: 1
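A runnable sketch of the stream-based check; `Objects::nonNull` is used since the goal is "every attribute is not null", and the constructor, `isComplete` name and `main` are added here only for the demo:

```java
import java.util.Objects;
import java.util.stream.Stream;

public class Customer {
    private String email;
    private String name;

    public Customer(String email, String name) {
        this.email = email;
        this.name = name;
    }

    // True only when every listed attribute is non-null.
    public boolean isComplete() {
        return Stream.of(email, name).allMatch(Objects::nonNull);
    }

    public static void main(String[] args) {
        System.out.println(new Customer("a@b.c", "Ann").isComplete()); // true
        System.out.println(new Customer(null, "Ann").isComplete());   // false
    }
}
```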
|
2018/03/17
| 341 | 1,279 |
<issue_start>username_0: I use : jupyter notebook file\_name.ipynb.
But how do I open a file with spaces in its name? For example:
filename is : " ABC XYZ .ipynb"
jupyter notebook ABC XYZ .ipynb , don't work.
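The short answer: the shell splits the command line on unquoted spaces, so quote the whole name (`jupyter notebook "ABC XYZ .ipynb"`) or escape each space (`jupyter notebook ABC\ XYZ\ .ipynb`). A quick runnable sketch of the same rule, with `touch`/`ls` standing in for `jupyter notebook` (assumed unavailable here):

```shell
touch "ABC XYZ .ipynb"   # create a file whose name contains spaces
ls "ABC XYZ .ipynb"      # quoting keeps the name one single argument
ls ABC\ XYZ\ .ipynb      # escaping each space works as well
```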
|
2018/03/17
| 476 | 1,811 |
<issue_start>username_0: I decided to use the framework CodeceptJS and the library Nightmare.
My issue is setting a cookie before running the whole test suite.
I read the documentation, and my understanding is that to solve my issue I need to use helper classes. Maybe I'm wrong, but still.
Perhaps I need to use a different approach; if so, let me know.
This is my helper:
```
'use strict';
class SetCookie extends Helper {
constructor(config){
super(config)
}
_beforeSuite() {
this.client = this.helpers['Nightmare'].browser;
this.getCookies().then((cookies) => {
console.log(cookies);
})
}
getCookies(){
return this.client.cookies.get()
}
}
module.exports = SetCookie;
```
**Problem**
Cookies return after finished test suite
|
2018/03/17
| 994 | 3,825 |
<issue_start>username_0: I imported a generated 3d model of a MRT scan into Unity. Unfortunately the model is imported into a lot of slices which make up the whole model.
The hierarchy is like this:
[](https://i.stack.imgur.com/NIJvZ.png)
As you can imagine it takes a lot of resources and slows down further processing. So I want to combine the children (default\_MeshPart0, default\_MeshPart1, ...) of "default" into one mesh. I found out about [Mesh Combine Wizard](https://forum.unity.com/threads/mesh-combine-wizard-free-unity-tool-source-code.444483/) which works perfectly for objects with only a few children, but it doesn't work with a larger number of meshes as children. I tried to move some children into another parent and combine them separately, in order to combine the new parents later on, but this doesn't work either as I got a random-looking mesh.
Do you have an idea how to fix this, or do you recommend another process?<issue_comment>username_1: It shouldn't be a problem if it's created for Unity3D. Anyway, if you want a single mesh, you will need a 3D modeling tool to combine it, then export it in FBX format according to the Unity guidelines as explained [here](https://docs.unity3d.com/Manual/HOWTO-exportFBX.html). I hope it helps you.
Upvotes: 0 <issue_comment>username_2: Combining the meshes is actually pretty straightforward (I'll run you through it in a minute). There is a gotcha, however - while it's easy to do at runtime, I've had some problems trying to reliably save the resulting mesh as a hard file asset to disk. I did get there in the end, but it was trickier than it seemed. You don't need to save your asset if you are okay with this being done in Start(), and you can do it in edit mode, but the resulting mesh will not be automatically saved with the scene - just something to keep in mind.
Back to geometry - geometry consists of three arrays
```
Vector3[] verts;
Vector2[] uv;
int[] triangles;
```
The only thing to wrap your head around is that if you have n vertices, uv[i] and verts[i] describe the i-th vertex (obviously); the triangles array contains indexes referring to the vertex table, three per triangle, laid out in sequence, so if you want to find triangle k you read
```
x=triangles[k*3]
y=triangles[k*3+1]
z=triangles[k*3+2]
```
and then you have your triangle defined by the three vectors verts[x], verts[y], verts[z].
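Purely as a runnable illustration of that indexing (Python here, but Unity's `Mesh.vertices` and `Mesh.triangles` use exactly this flat layout):

```python
def triangle(verts, triangles, k):
    # triangle k is described by three consecutive indices into the vertex table
    x, y, z = triangles[3 * k], triangles[3 * k + 1], triangles[3 * k + 2]
    return verts[x], verts[y], verts[z]

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
tris = [0, 1, 2, 2, 1, 3]  # two triangles sharing the edge between vertices 1 and 2
```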
There are also submeshes, used when a mesh contains multiple materials, but it's not complicated once you know how the above structure is laid out.
To create a new mesh, first do GetComponentsInChildren, then grab a list of all the meshes and count the total vertex count j.
Create a new Mesh object, initialise it with the arrays verts = new Vector3[j], tris = new int[j\*3] and then just copy all the vertex data. Once you have the mesh, create a new object with a mesh filter and mesh renderer, feed the mesh to it and you're done. The problem is this will disappear on a play-mode state change unless you save the asset, but that's a separate question.
If you have submeshes you need to list all the materials that are present and group vertices by material used (sub-group = a group using a single material); aside from managing another list this isn't much harder (and nothing terrible happens if you ignore that step, you'll just get a single solid object), which is great for performance in most cases as your object will be a single draw call not depending on dynamic batching.
Upvotes: 1 <issue_comment>username_3: Perhaps you need to do some `post-scan process`.
Try:
* Opening the scan model in a `3d software` like Maya, 3D Max or Blender.
* `Attach` or `Merge` all the parts together.
* Export it as `.fbx`.
* Import it into Unity.
* In Unity it will be a `single gameObject`.
Upvotes: 2 [selected_answer]
|
2018/03/17
| 741 | 2,248 |
<issue_start>username_0: My question is relatively simple. Is 'str' easier than 'int' in the following examples?
```
slot_1 = 'med kit'
if slot_1 == 'med kit':
    print('med kit ready')
```
is that faster to run and better than
```
slot_1 = 0
#0 represents med kit
if slot_1 == 0:
    print('med kit ready')
```
Or is the one using integers more Pythonic, and does it run better?
What if I were to have multiple things, not just med kit. Would it still be better to use strings? If you want any more information I can edit.<issue_comment>username_1: You can look to the Python standard library for a hint of what is most Pythonic.
The [decimal module](https://github.com/python/cpython/blob/3.6/Lib/decimal.py) and re module both use named constant strings rather than integer constants.
In Python 3, the [*enum*](https://docs.python.org/3/library/enum.html#module-enum) module provides IntEnum so that you get the intelligibility of readable names and the benefits of ints (can be sequenced and can do math such as bitwise operations).
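Applied to the question's med-kit scenario, a minimal `IntEnum` looks like this (the names are illustrative):

```python
from enum import IntEnum

class Item(IntEnum):
    MED_KIT = 0
    AMMO = 1

slot_1 = Item.MED_KIT
if slot_1 == 0:          # compares like an int
    print(slot_1.name)   # but still carries a readable name: MED_KIT
```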
Upvotes: 2 <issue_comment>username_2: There are hundreds of answer, depending on too many factors.
But if in doubt, refer to the [Zen of Python](https://www.python.org/dev/peps/pep-0020/):
>
> Explicit is better than implicit.
>
>
>
and
>
> Readability counts.
>
>
>
may suggest to go for
```
slot_1 = 'med kit'
if slot_1 == 'med kit':
print('med kit ready')
```
since it's more readable
Upvotes: 2 [selected_answer]<issue_comment>username_3: If you are just interested in speed there is basically no significant difference with one if statement but using strings will accumulate if you have lots of equality tests. Here is a test on 10,000,000 runs:
```
import time
N = 10000000
# String test
print('String Test')
t0 = time.clock()
slot_1 = 'med kit'
g = 0
for x in range(N):
if slot_1 == 'med kit':
g = g + 1
t1 = time.clock()
total = t1-t0
print("%.8F seconds\n" % total)
# Integer test
print('Integer Test')
t0 = time.clock()
slot_1 = 0
g = 0
for x in range(N):
if slot_1 == 0:
g = g + 1
t1 = time.clock()
total = t1-t0
print("%.8F seconds\n" % total)
```
Output:
```
String Test
0.72845136 seconds
Integer Test
0.69446039 seconds
```
Upvotes: 1
|
2018/03/17
| 500 | 2,232 |
<issue_start>username_0: I am new to android studio and my problem might sound pretty simple but I do badly need a solution.
I need to check whether my static string's value is equal to 1 before executing a particular block of code. It updates every 500ms. It goes from 1 to 16 and then back to 1.
I tried countdowntimer but it didn't work. Please help.
Edit:This is what i am trying.
```
store=findViewById(R.id.store);
store.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
for (int i = 1; i <= 16; i++){
String m=s;
if (m.equals("1")) {
store();
break;
} else {
new Timer().schedule(new TimerTask() {
@Override
public void run() {
// this code will be executed after 2 seconds
}
}, 500);
}
}
}
});
```<issue_comment>username_1: So essentially what you want to do is, you want to observe the value right?
What you can do instead is, every time you change the value, you invoke a callback, or already explicitly call your block of code.
```
// this gets called every 500 ms
public static void updateMyValue(String value){
MY_STATIC_STRING = value;
callMyBlockOfCode();
}
```
Upvotes: 1 <issue_comment>username_2: This one worked for me. Thank you everyone for your time.
```
store=findViewById(R.id.store);
store.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
String m=s;
if (m.equals("1")) {
store();
} else {
int i=Integer.parseInt(m);
i=(17-i)*delay;
new CountDownTimer(i, 1) {
public void onFinish() {
store();
}
public void onTick(long millisUntilFinished) {
}
}.start();
}
}
});
```
Upvotes: 1 [selected_answer]
|
2018/03/17
| 402 | 1,757 |
<issue_start>username_0: I had been using the Facebook login API for the last 8 months and it was working fine. Today, without changing any settings or any code, the API is returning an error.
```
Graph returned an error: Can't load URL: The domain of this URL isn't included in the app's domains. To be able to load this URL, add all domains and sub-domains of your app to the App Domains field in your app settings.
```
I don't understand what happened, NOTHING has been changed.
I cross-checked all settings and the domain name is also the same. The App ID and Client secret are also correct.
|
2018/03/17
| 632 | 2,521 |
<issue_start>username_0: I have a view that shows the details of a specific conference.
Each conference can have multiple registration types. The registration type table has columns like: name, min\_participants, max\_participants, etc.
In this conference details page I'm showing the registration types info of that conference with the code below. The registration type name is working fine, but then I have a select menu so the user can select how many participants to register for each registration type. And I'm in doubt about how to do this part, because a user can only select values between the minimum and maximum participants that exist in the database columns min\_participants and max\_participants.
So for example for the conference with id "1", when the user accesses "app.test/conf/1", it appears a registration type "general" that has "min\_participants" column as "1" and "max\_participants" as "3" in the database. So in the select menu should only appear the values "1, 2, 3".
Do you know how to do that? I have this select menu static for now because I'm not sure how to build it. Maybe with an accessor method "quantityBetweenAttribute", but I don't understand what the logic in the method should be.
```
@foreach($registration_types as $rtype)
- {{$rtype->name}}
1
2
3
@endforeach
```
**Conference model:**
```
class Conference extends Model
{
// A conference has many registration types
public function registrationTypes(){
return $this->hasMany('App\RegistrationType', 'conference_id');
}
}
public function quantityBetweenAttribute() {
$registrationTypes = $this->registrationTypes();
}
```
**RegistrationType Model:**
```
class RegistrationType extends Model
{
public function conference(){
return $this->belongsTo('App\Conference');
}
}
```<issue_comment>username_1: I think this might work:
```
@foreach($registration_types as $rtype)
- {{$rtype->name}}
@for($i = $rtype->min\_participants; $i <= $rtype->max\_participants; $i++)
{{ $i }}
@endfor
@endforeach
```
Note that you could optionally use the `{{ $i == $selectedType ? 'selected' : '' }}` to set the selected state by passing the variable $selectedType to your view.
Hope it helps :)
Upvotes: 2 [selected_answer]<issue_comment>username_2: You could add a for loop:
```
@foreach($registration_types as $rtype)
- {{$rtype->name}}
@for ($i = $rtype->min\_participants; $i <= $rtype-> max\_participants; $i++)
{{ $i }}
@endfor
@endforeach
```
Upvotes: 0
|
2018/03/17
| 518 | 2,149 |
<issue_start>username_0: I have a function to get the width of the container. This function gets called every time the window size changes. In this function I set the state "containerWidth" to the current width of the container.
I need the width for the function getNumberOfItems, which calculates how many items fit into the container. But when I change the window size and set the state "containerWidth", the function "getNumberOfItems" doesn't update.
Thanks for your help
```
getContainerWidth() {
let width = document.getElementsByClassName('container')[0].offsetWidth + 'px';
this.setState({
containerWidth: width,
});
}
getNumberOfItems(){
let width = this.state.containerWidth;
let postWidth = this.state.postWidth;
...
...
...
return numberOfItems;
}
componentDidMount() {
window.addEventListener("resize", this.getContainerWidth.bind(this));
}
componentWillMount() {
this.getContainerWidth.bind(this);
this.getNumberOfItems();
}
componentWillUnmount() {
window.removeEventListener('resize', this.getNumberOfItems.bind(this));
}
```<issue_comment>username_1: Since `componentWillMount` is only called once before the component mounts, the `getNumberOfItems` function is not being called on `containerWidth` change. What you can do is call it from the `componentDidUpdate` function, like this:
```
componentDidUpdate(prevProps, prevState) {
if(this.state.containerWidth !== prevState.containerWidth) {
this.getNumberOfItems();
}
}
getNumberOfItems(){
let width = this.state.containerWidth;
let postWidth = this.state.postWidth;
...
...
...
return numberOfItems;
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: The setState method takes a callback as its second argument. So you could do:
```
getContainerWidth() {
let width = document.getElementsByClassName('container')[0].offsetWidth + 'px';
this.setState({
containerWidth: width,
    }, this.getNumberOfItems);
}
```
Upvotes: 1
|
2018/03/17
| 1,147 | 3,720 |
<issue_start>username_0: I want to use for example this array of pointers to functions, without using STL.
That array is an array of pointers that I call functions `OptionA`, `OptionB` and so on.
```
int(*Functions[4])();
Functions[0] = OptionA;
Functions[1] = OptionB;
Functions[2] = OptionC;
Functions[3] = Exit;
```
Now if I write inside the function where I have my array
```
Functions[0];
```
I want the function 'OptionA' to be called, where it has been defined before, for example like this:
```
int OptionA()
{
cout << "OPTION A";
_getch();
return 0;
}
```
Is it possible to do this without STL?
If not, I would like to know how to do it with STL.<issue_comment>username_1: I think that what you are looking for is
[How to initialize a vector of pointers](https://stackoverflow.com/questions/9090680/how-to-initialize-a-vector-of-pointers)
Once your vector is initialized you can send it to a function like a normal data type.
Example:
```
std::vector array\_of\_pointers{ new int(0), new int(1), new int(17) };
function(array\_of\_pointers);
```
In the declaration of the function
```
void function(std::vector array\_of\_pointers);
```
I hope this answer your question.
Upvotes: 0 <issue_comment>username_2: You can create and pass arrays of function pointers like any other types. It's easiest if you have a type alias (my example leverages `using`, but `typedef` will also work).
```
#include
using Function = int (\*)(int, int);
int add(int a, int b) {
return a + b;
}
int sub(int a, int b) {
return a - b;
}
void do\_stuff(int a, int b, Function \* fns, int cnt) {
for(auto i = 0; i < cnt; ++i) {
std::cout << "Result " << i << " = " << fns[i](a, b) << '\n';
}
}
int main() {
Function fns[2] = { add, sub };
do\_stuff(10, 7, fns, 2);
return 0;
}
```
Output:
```
Result 0 = 17
Result 1 = 3
```
Upvotes: 2 <issue_comment>username_3: In C and C++, arrays are second-class. They cannot be passed by value by themselves, only if somehow wrapped.
As a first step, the questions you have to decide are:
1. Does your array have a fixed length?
2. And do you have to pass it *by value* or can you pass it *by reference*?
If you have to pass it by value, is that a choice you want the caller to make, or the callee to impose? In the first case, pass it by reference.
If you pass the array by reference, nothing can beat using a [`gsl::span`](https://stackoverflow.com/questions/45723819/what-is-a-span-and-when-should-i-use-one), unless you pass multiple sequences all having intrinsically the same length, in which case passing pointers and a single length-argument is more efficient and maybe comfortable.
If you pass an array of variable length by value, try to use a [`std::vector`](http://en.cppreference.com/w/cpp/container/vector). That's also the go-to type to pass a by-ref argument as if by-value.
Otherwise (array of fixed length, by value), nothing beats [`std::array`](http://en.cppreference.com/w/cpp/container/array).
Upvotes: 0 <issue_comment>username_4: If `p` is a pointer to a function, which receives no parameters, you should call it by this syntax:
```
p();
```
So, if `array` is an array of pointers to functions, you should call one of them using the same syntax idea:
```
array[0]();
```
Here the parentheses are important; they say "call this function, and pass no parameters to it". If you have no parentheses
```
array[0];
```
this means "select this function from the array, but do nothing with it".
It's a useless expression, like if you have an integer `x`, then `x * 5` means "multiply `x` by 5 and do nothing with the result" (useless), while `x *= 5` means "multiply `x` by 5 and replace `x` with the result".
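The same select-versus-call distinction can be seen in a quick Python sketch (used here only because it runs without a compiler; the C++ syntax is analogous):

```python
def option_a():
    return "OPTION A"

functions = [option_a]       # a list holding the function object

selected = functions[0]      # selects the function; nothing runs
result = functions[0]()      # the parentheses perform the call
```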
Upvotes: 0
|
2018/03/17
| 451 | 1,252 |
<issue_start>username_0: In my MySql query I need to check something like:
```
... WHERE DATE_ADD(IF(Date1>Date2, Date1, Date2), INTERVAL 3 DAY)
```
But apparently this doesn't work, this condition always return true for some reason I don't know.
What am I doing wrong? Isn't it possible to use IF inside a DATE\_ADD function?<issue_comment>username_1: could using a case when eg:
```
where (case when Date1>Date2 then DATE_ADD( Date1, INTERVAL 3 DAY)
else DATE_ADD( Date2, INTERVAL 3 DAY) END) < CURDATE()
```
or using if
```
WHERE IF(Date1>Date2, DATE_ADD( Date1, INTERVAL 3 DAY) , DATE_ADD( Date2, INTERVAL 3 DAY))
```
Upvotes: 0 <issue_comment>username_2: Or use it without function `DATE_ADD`
```
WHERE (IF(Date1 > Date2, Date1, Date2) + INTERVAL 3 DAY) < CURDATE()
```
**demo with select**
<http://rextester.com/KWR80225>
Upvotes: 2 [selected_answer]<issue_comment>username_3: Well, I tried the following
```
set @b = date'2017/5/1';
set @a = date'2018/5/1';
select date_add(if(@a > @b, @a, @b), interval 3 day) < curdate() as u;
```
and the answer was 0; I tried it again after the change
```
set @a = date'2018/1/1';
```
and the answer was 1, all as one would like.
What are the ranges of your "Date1" and "Date2"?
Upvotes: 0
|
2018/03/17
| 688 | 2,129 |
<issue_start>username_0: How do I install the anaconda / miniconda without prompts on Linux command line?
Is there a way to pass `-y` kind of option to agree to the T&Cs, suggested installation location etc. by default?<issue_comment>username_1: AFAIK [`pyenv`](https://github.com/pyenv/pyenv) lets you install `anaconda`/`miniconda`
(after a successful installation):
```
pyenv install --list
pyenv install miniconda3-4.3.30
```
Upvotes: 0 <issue_comment>username_2: can be achieved by `bash miniconda.sh -b` (thanks [@darthbith](https://stackoverflow.com/questions/49338902/how-to-silently-install-anaconda-miniconda-on-linux?noredirect=1#comment85688510_49338902))
The command line usage for this can only be seen with `-h` flag but not `--help`, so I missed it.
To install the anaconda to another place, use the `-p` option:
```
bash anaconda.sh -b -p /some/path
```
Upvotes: 6 [selected_answer]<issue_comment>username_3: For a quick silent installation of miniconda I use a wrapper [script](https://gist.github.com/username_3kazandjian/cce01cf3e15c0b41c1c4321245a99096) that can be executed from the terminal without even downloading it first. It takes the installation destination path as an argument (in this case `~/miniconda`) and does some validation too.
```
curl -s https://gist.githubusercontent.com/username_3kazandjian/cce01cf3e15c0b41c1c4321245a99096/raw/03c86dae9a212446cf5b095643854f029b39c921/miniconda_installer.sh | bash -s -- ~/miniconda
```
Upvotes: 0 <issue_comment>username_4: Silent installation can be done like this, but it doesn't update the `PATH` variable so you can't run it after the installation with a short command like `conda`:
```
cd /tmp/
curl -LO https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -u
```
Here `-b` means batch/silent mode, and `-u` means update the existing installation of Miniconda at that path, rather than failing.
You need to run additional commands to initialize `PATH` and other shell init scripts, e.g. for Bash:
```
source ~/miniconda3/bin/activate
conda init bash
```
Upvotes: 1
|
2018/03/17
| 816 | 2,395 |
<issue_start>username_0: I have this code:
```
frames = input("Enter a number")
lists = [[] for i in range(int(frames))]
```
My attempt:
```
for i in lists:
    lists[0].insert(0,"Page Frame")
```
But that didn't give me the correct output
And I want to insert a string into the first index of each of the lists. For example: I want to insert `"Page Frame"`:
```
["Page Frame",0,0,0,0]
["Page Frame",0,0,0,0]
["Page Frame",0,0,0,0]
["Page Frame",0,0,0,0]
["Page Frame",0,0,0,0]
```
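For the list question above, a minimal corrected sketch of the attempt: the loop's bug is that it always touches `lists[0]` instead of the current sub-list (sample values are assumed in place of user input):

```python
frames = 5                        # stands in for int(input("Enter a number"))
lists = [[0, 0, 0, 0] for _ in range(frames)]

for sub in lists:                 # iterate over each sub-list
    sub.insert(0, "Page Frame")   # insert at index 0 of that sub-list
```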
|
2018/03/17
| 779 | 2,536 |
<issue_start>username_0: I have an old project with a pre-loaded database in the Assets folder. The project has [SQLiteOpenHelper](https://developer.android.com/reference/android/database/sqlite/SQLiteOpenHelper.html) implemented for database operations. But now, to update the app, I want to move the project to the [Room](https://developer.android.com/topic/libraries/architecture/room.html) database library.
So my question is: is there any method or feature available in the Room library with which I can use a preloaded db file in the app at runtime?
Or is there any way I can load the db first and then directly execute queries on that db from the Room library?
|
2018/03/17
| 445 | 1,355 |
<issue_start>username_0: Given table1 with fields **person1\_id, person2\_id and person3\_id**
and table 2 with fields **fname, lname, id**, I want to return the names from table2 for person1\_id, person2\_id, and person3\_id, hopefully in one query.
I understand how to do a join to retrieve one name, but not multiple names.
Thanks<issue_comment>username_1: I think this SQL query will help you:
```
SELECT table2.fname, table2.lname, table2.id
FROM table2
FULL JOIN table1
ON table2.id=table1.person1_id OR table2.id=table1.person2_id OR table2.id=table1.person3_id ORDER BY table2.id;
```
Upvotes: 0 <issue_comment>username_2: You can join the same table multiple times, once per id column.
Please try this, and let me know if it solves your issue.
```
select t1.*,
       p1.fname, p1.lname,
       p2.fname, p2.lname,
       p3.fname, p3.lname
from table1 as t1
left join table2 as p1 on p1.id = t1.person1_id
left join table2 as p2 on p2.id = t1.person2_id
left join table2 as p3 on p3.id = t1.person3_id
```
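To see the multiple-join idea end-to-end against the question's schema, here is a self-contained sketch (Python's built-in sqlite3 is used only as a convenient SQL sandbox; the column values are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (person1_id, person2_id, person3_id);
    CREATE TABLE table2 (fname, lname, id);
    INSERT INTO table1 VALUES (1, 2, 3);
    INSERT INTO table2 VALUES ('Ann', 'A', 1), ('Bob', 'B', 2), ('Cid', 'C', 3);
""")
row = con.execute("""
    SELECT p1.fname, p2.fname, p3.fname
    FROM table1 AS t1
    JOIN table2 AS p1 ON p1.id = t1.person1_id
    JOIN table2 AS p2 ON p2.id = t1.person2_id
    JOIN table2 AS p3 ON p3.id = t1.person3_id
""").fetchone()
print(row)  # ('Ann', 'Bob', 'Cid')
```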
Upvotes: 1 <issue_comment>username_3: Try this, based on the answer above:
```
SELECT table2.fname, table2.lname, table2.id
FROM table2
INNER JOIN table1
ON
(table2.id=table1.person1_id OR table2.id=table1.person2_id OR table2.id=table1.person3_id)
ORDER BY table2.id;
```
Upvotes: 0
|
2018/03/17
| 1,087 | 2,760 |
<issue_start>username_0: Let me explain my problem using a dummy example.
This is file A -
```
1 10 20 aa
2 30 40 bb
3 60 70 cc
. .. .. ..
```
and This is file B -
```
10 15 xx yy mm
21 29 mm nn ss
11 18 rr tt yy
69 90 qq ww ee
.. .. .. .. ..
```
I am trying to merge these files A and B such that there exist some overlapping between A's row and B's row.
**Overlapping between A's row and B's row, in my case:**
there is something common between range starting from $2 to $3 for A's row and range starting from $1 to $2 for B's row. in above example, there is overlapping between range(10,20) and range(10,15).
Here range(10,20) = [10,11,12,13,14,15,16,17,18,19] and
range(10,15) = [10,11,12,13,14]
So the expected output is -
```
1 10 20 aa 10 15 xx
1 10 20 aa 11 18 rr
3 60 70 cc 69 90 qq
```
I tried this way (using and awk):
```
for peak in State.peaks:
    i = peak[-1]
    peak = peak[:-1]
    a = peak[1]
    b = peak[2]
    d = State.delta
    c = ''' awk '{id=%d;delta=%d;a=%d;b=%d;x=%s;y=%s;if((x<=a&&y>a)||(x<=b&&y>b) || (x>a&&y<=b)) print id" "$7" "$3-$2} ' %s > %s ''' % (i, d, a, b, "$2-d", "$3+d", State.fourD, "file"+str(name))
    os.system(c)
```
Wanted to remove python part completely as it is taking much time.<issue_comment>username_1: `awk` to the rescue!
```
$ awk 'function intersect(x1,y1,x2,y2)
         {return (x1>=x2 && x1<y2) || (x2>=x1 && x2<y1)}
       ...'
```
Note that
```
(x1>=x2 && x1<y2) || (x2>=x1 && x2<y1)
```
which is equivalent to @username_2's condition, which is more compact and more efficient, even though not trivial at first sight.
Upvotes: 1 <issue_comment>username_2: This Awk script does the job:
```
NR == FNR { record[NR] = $0; lo[NR] = $2; hi[NR] = $3; nrecs = NR; next }
NR != FNR { # Overlap: lo[A] < hi[B] && lo[B] < hi[A]
for (i = 1; i <= nrecs; i++)
{
if (lo[i] < $2 && $1 < hi[i])
print record[i], $1, $2, $3
}
}
```
I saved it as `range-merge-53.awk` (`53` is simply a random double-digit prime). I created `file.A` and `file.B` from your sample data, and ran:
```
$ awk -f range-merge-53.awk file.A file.B
1 10 20 aa 10 15 xx
1 10 20 aa 11 18 rr
3 60 70 cc 69 90 qq
$
```
The key is the 'overlap' condition, which must exclude the high value of each range — often denoted `[lo..hi)` for an open-closed range.
It would be possible to omit either the `next` or the `NR != FNR` condition (but not both) and the code would work as well.
See also [Determine whether two date ranges overlap](https://stackoverflow.com/questions/325933/) — the logic of ranges applies to dates and integers and floating point, etc.
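The half-open overlap test is easy to sanity-check outside awk; the same condition in Python against the question's sample rows:

```python
def overlaps(lo1, hi1, lo2, hi2):
    # half-open ranges [lo, hi) overlap iff each starts before the other ends
    return lo1 < hi2 and lo2 < hi1

file_a = [(1, 10, 20, "aa"), (2, 30, 40, "bb"), (3, 60, 70, "cc")]
file_b = [(10, 15, "xx"), (21, 29, "mm"), (11, 18, "rr"), (69, 90, "qq")]

merged = [a + b for a in file_a for b in file_b
          if overlaps(a[1], a[2], b[0], b[1])]
```

This reproduces the three expected rows from the question and rejects all the non-overlapping pairs.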
Upvotes: 1 [selected_answer]
|
2018/03/17
| 858 | 3,341 |
<issue_start>username_0: Currently, I generate an invitation key that gets embedded into the login URL and sent to users:
eg: <https://localhost:44338/account/login/mykey>
I also enabled google authentication
[](https://i.stack.imgur.com/4Fiie.png)
How do I round-trip "mykey" when the user clicks on the Google link, so that it is available when the user gets created in ExternalLoginConfirmation (in the default ASP.NET Core setup)?
EDIT: For example what code should I add to this method
```
public IActionResult ExternalLogin(string provider, string key, string returnUrl = null)
{
var redirectUrl = Url.Action(nameof(ExternalLoginCallback), "Account", new { returnUrl });
var properties = _signInManager.ConfigureExternalAuthenticationProperties(provider, redirectUrl);
return Challenge(properties, provider);
}
```<issue_comment>username_1: This is actually more of an OAuth question than something specific to ASP.NET. It is done by encoding a complex object and passing it to the OAuth provider in the `state` query paremeter.
Take for example this JSON object:
```
{
"validation": "aGUID",
"key": "invitationKey"
}
```
If you Url Encode this value, you can pass it as the `state`:
```
...&state=%7B%22validation%22%3A+%22aGUID%22%2C%22key%22%3A+%22invitationKey%22%7D+
```
When you receive the response from the provider, you decode the `state` value and deserialize it back into an object. You can then validate that the GUID in `validation` matches what you sent and process the invitation key from the `key` property.
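A quick sketch of that round trip (Python used only to make the encoding concrete; the object keys mirror the illustrative JSON above):

```python
import json
from urllib.parse import quote, unquote

state_obj = {"validation": "aGUID", "key": "invitationKey"}

state_param = quote(json.dumps(state_obj))   # append as &state=...
decoded = json.loads(unquote(state_param))   # done again on the callback side
```

On the callback you verify `decoded["validation"]` against the value you stored, then process `decoded["key"]`.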
Upvotes: 1 <issue_comment>username_2: username_1's answer is correct, but as your follow-up question notes, you're trying to figure out how to do this with ASP.NET Core. The solution is to wire up some of the [`OpenIdConnectEvents`](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.authentication.openidconnect.openidconnectevents?view=aspnetcore-2.0) which allows you to intercept various parts of the OIDC flow. This goes in your setup method where you register for OIDC:
```cs
.AddOpenIdConnect("oidc", options =>
{
options.SignInScheme = "Cookies";
options.Authority = appconfig["OidcAuthority"];
options.ClientId = appconfig["OidcClientId"];
options.ClientSecret = appconfig["OidcClientSecret"];
// other config omitted
options.Events = new OpenIdConnectEvents
{
OnRedirectToIdentityProvider = context =>
{
context.Properties.Items.Add("invitationkey", "somevalue");
            return Task.CompletedTask;
},
OnTicketReceived = context =>
{
var invitation = context.Properties.Items["invitationkey"];
            return Task.CompletedTask;
}
    };
});
```
There are quite a few events available. One of my sytems allows our users to auth via Google, but we use it through IdentityServer4, so offhand I don't know the specific response event. You'll have to review them to figure out which is appropriate for the type of flow you're using and how your identity provider responds, but this should get you on the right track.
Since the return-trip event is defined in startup, you'll have to come up with a way to "share" that value to other parts of your application (inject some type of simple caching service, for example).
Upvotes: 0
|
2018/03/17
| 1,797 | 6,490 |
<issue_start>username_0: Hey Everyone, I am mentally stuck
I have a list of objects retrieved from a Web API that has three values that consist of a parent ID and a string value and a row ID
IE:
CategoryID Name ParentID
1 Tools 0
2 Hammer 1
3 ScrewDriver 1
4 Phillips 3
5 Standard 3
6 #2 4
7 Torx 3
8 #15 7
etc.
This needs to be put into a simple list object that consists of a ParentID and a concatenated string of the immediate category name and the parent id
```
CategoryID   FullCategoryName
0            Tools
2            Tools/Hammer
3            Tools/ScrewDriver
4            Tools/ScrewDriver/Phillips
5            Tools/ScrewDriver/Standard
6            Tools/ScrewDriver/Phillips/#2
7            Tools/ScrewDriver/Torx
8            Tools/ScrewDriver/Torx/#15
```
As I hope you can see, I need the CategoryID and the full path, built by following the ParentID chain, with the names joined by slashes.
API Called Class
```
public class Categories_Web
{
public string CategoryName { get; set; }
public int CategoryParent { get; set; }
public int CategoryID { get; set; }
}
```
Simplified Class with concatenated names
```
public class WebCategoriesSimple
{
public int CategoryID { get; set; }
public string CategoryName { get; set; }
}
```
I hope that this makes sense and thanks for your help!<issue_comment>username_1: This is one of those problems that I think LINQ is great for. I'm having a bit of trouble understanding the format of your data. You can use a projection with LINQ, though, to project the results of the original list into a new list containing either a new object based on a class definition or a new anonymous class.
Projecting into a new anonymous class, using LINQ, would look something like the following. Keep in mind that we don't know your object names, because you didn't provide that information, so you may need to extrapolate a bit from my example.
(Note that I reread your post and I think I understand what your original objects may be called, so the post has been edited to use your class names.)
```
var newListOfStuff = originalListOfStuff.Select(s => new {
CategoryID = s.CategoryID,
CategoryName = $"Tools/{s.CategoryName}/#{s.CategoryParent}"
});
```
The string is created using string interpolation, which is new in C#. If your version of C# doesn't support that, you can use string.Format instead, like this:
```
string.Format("Tools/{0}/#{1}", s.CategoryName, s.CategoryParent);
```
With that new list, you can loop through it and use properties like this:
```
foreach(var item in newListOfStuff)
{
Console.WriteLine(item.CategoryID);
Console.WriteLine(item.CategoryName);
}
```
Here is a similar answer for reference:
<https://stackoverflow.com/a/9885766/3096683>
Upvotes: 0 <issue_comment>username_2: What you have is a hierarchy, and whenever you have that you can consider Recursion - a method that calls itself. You don't always want to use recursion, but for manageable sized lists or tail call optimized recursive methods it is a powerful tool.
Here is demo code that outputs:
```
Category: 1, Hierarchy: Tools
Category: 2, Hierarchy: Tools/Hammer
Category: 3, Hierarchy: Tools/ScrewDriver
Category: 4, Hierarchy: Tools/ScrewDriver/Phillips
Category: 5, Hierarchy: Tools/ScrewDriver/Standard
Category: 6, Hierarchy: Tools/ScrewDriver/Phillips/#2
Category: 7, Hierarchy: Tools/ScrewDriver/Torx
Category: 8, Hierarchy: Tools/ScrewDriver/Torx/#15
```
(Note, I don't think your sample output of "0, Tools" is correct)
This program creates a hard-coded list of `ProductDef`. Yours probably comes from a database or something.
Then it creates an empty list of `ProductHierarchy` which will be populated as the recursive operation runs.
It then kicks off the recursion with the first call to `Build()`. Within that method, it will call itself while the item passed in has a parent in the hierarchy.
When there are no more parents, it adds the item to the list of `ProductHierarchy`.
```
void Main()
{
List<ProductDef> pdefs = new List<ProductDef>{
new ProductDef{Category = 1, Product = "Tools", ParentCategory = 0},
new ProductDef{Category = 2, Product = "Hammer", ParentCategory = 1},
new ProductDef{Category = 3, Product = "ScrewDriver", ParentCategory = 1},
new ProductDef{Category = 4, Product = "Phillips", ParentCategory = 3},
new ProductDef{Category = 5, Product = "Standard", ParentCategory = 3},
new ProductDef{Category = 6, Product = "#2", ParentCategory = 4},
new ProductDef{Category = 7, Product = "Torx", ParentCategory = 3},
new ProductDef{Category = 8, Product = "#15", ParentCategory = 7}
};
//This will get filled as we go
List<ProductHierarchy> phlist = new List<ProductHierarchy>();
//kick off the recursion
foreach (var element in pdefs)
{
Build(element, pdefs, element.Product, element.Category, phlist);
}
//do stuff with your list
foreach (ProductHierarchy ph in phlist)
{
Console.WriteLine(ph.ToString());
}
}
class ProductDef
{
public int Category { get; set; }
public string Product { get; set; }
public int ParentCategory { get; set; }
}
class ProductHierarchy
{
public int Category { get; set; }
public string Hierarchy { get; set; }
public override string ToString()
{
return $"Category: {Category}, Hierarchy: {Hierarchy}";
}
}
void Build(ProductDef def, List<ProductDef> lst, string fullname, int cat, List<ProductHierarchy> phlist)
{
string fullprodname = fullname;
//obtain the parent category of product
var parent = lst.FirstOrDefault(l => l.Category == def.ParentCategory);
if (parent != null)
{
fullprodname = $"{parent.Product}/{fullprodname}";
//recurse the call to see if the parent has any parents
Build(parent, lst, fullprodname, cat, phlist);
}
else
{
//No further parents found, add it to a list
phlist.Add( new ProductHierarchy { Category = cat, Hierarchy = fullprodname });
}
}
```
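The same walk-up-the-parents idea can be sketched language-neutrally. Here is a minimal Python version using the question's data (the function and variable names are mine, not from the answer):

```python
def full_paths(rows):
    """rows: (category_id, name, parent_id) tuples; parent_id 0 marks a root."""
    by_id = {cid: (name, parent) for cid, name, parent in rows}

    def path(cid):
        name, parent = by_id[cid]
        # Root categories return their own name; everything else
        # prepends the parent's full path.
        return name if parent == 0 else path(parent) + "/" + name

    return {cid: path(cid) for cid in by_id}

rows = [
    (1, "Tools", 0), (2, "Hammer", 1), (3, "ScrewDriver", 1),
    (4, "Phillips", 3), (5, "Standard", 3), (6, "#2", 4),
    (7, "Torx", 3), (8, "#15", 7),
]
print(full_paths(rows)[6])  # Tools/ScrewDriver/Phillips/#2
```

Like the C# version, this is fine for manageable list sizes; for very deep hierarchies an iterative walk avoids recursion limits.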
Upvotes: 2 [selected_answer]
|
2018/03/17
| 987 | 3,247 |
<issue_start>username_0: I am using opencv::solvePnP to return a camera pose. I run PnP, and it returns the rvec and tvec values.(rotation vector and position).
I then run this function to convert the values to the camera pose:
```
void GetCameraPoseEigen(cv::Vec3d tvecV, cv::Vec3d rvecV, Eigen::Vector3d &Translate, Eigen::Quaterniond &quats)
{
Mat R;
Mat tvec, rvec;
tvec = DoubleMatFromVec3b(tvecV);
rvec = DoubleMatFromVec3b(rvecV);
cv::Rodrigues(rvec, R); // R is 3x3
R = R.t(); // rotation of inverse
tvec = -R*tvec; // translation of inverse
Eigen::Matrix3d mat;
cv2eigen(R, mat);
Eigen::Quaterniond EigenQuat(mat);
quats = EigenQuat;
double x_t = tvec.at<double>(0, 0);
double y_t = tvec.at<double>(1, 0);
double z_t = tvec.at<double>(2, 0);
Translate.x() = x_t * 10;
Translate.y() = y_t * 10;
Translate.z() = z_t * 10;
}
```
This works, yet at some rotation angles, the converted rotation values flip randomly between positive and negative values. Yet, the source `rvecV` value does not. I assume this means I am going wrong with my conversion. How can i get a stable Quaternion from the PnP returned cv::Vec3d?
EDIT: This seems to be Quaternion flipping, as mentioned here:
[Quaternion is flipping sign for very similar rotations?](https://stackoverflow.com/questions/42428136/quaternion-is-flipping-sign-for-very-similar-rotations#42429341)
Based on that, i have tried adding:
```
if(quat.w() < 0)
{
quat = quat.Inverse();
}
```
But I see the same flipping.<issue_comment>username_1: Both `quat` and `-quat` represent the same rotation. You can check that by taking a unit quaternion, converting it to a rotation matrix, then doing
```
quat.coeffs() = -quat.coeffs();
```
and converting that to a rotation matrix as well.
If for some reason you always want a positive `w` value, negate all coefficients if `w` is negative.
Upvotes: 2 <issue_comment>username_2: The sign should not matter...
... rotation-wise, as long as *all four fields* of the 4D quaternion are getting flipped. There's more to it explained here:
[Quaternion to EulerXYZ, how to differentiate the negative and positive quaternion](https://stackoverflow.com/questions/30834254/quaternion-to-eulerxyz-how-to-differentiate-the-negative-and-positive-quaternio)
Think of it this way:
[Angle/axis both flipped mean the same thing](https://i.stack.imgur.com/3aM7i.png)
and mind the clockwise to counterclockwise transition much *like* in a mirror image.
There may be convention to keep the `quat.w()` or `quat[0]` component positive and change other components to opposite accordingly. Assume `w = cos(angle/2)` then setting `w > 0` just means: I want `angle` to be within the `(-pi, pi)` range. So that the -270 degrees rotation becomes +90 degrees rotation.
Doing the `quat.Inverse()` is probably not what you want, because this creates a rotation in the opposite direction. That is `-quat != quat.Inverse()`.
**Also:** check that both systems have the same handedness ([chirality](https://en.wikipedia.org/wiki/Chirality_(mathematics)))! Test if your rotation matrix determinant is +1 or -1.
(sry for the image link, I don't have enough reputation to embed them).
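The sign-invariance is easy to verify numerically. A minimal NumPy sketch (assuming a w-first unit quaternion; `quat_to_matrix` is the standard conversion formula written out by hand, and the names are illustrative):

```python
import numpy as np

def quat_to_matrix(q):
    # q = (w, x, y, z), assumed normalized
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def canonical(q):
    # Flip all four components when w < 0, so nearby rotations
    # map to nearby quaternions (no more sign "jumps").
    return -q if q[0] < 0 else q

q = np.array([0.5, 0.5, 0.5, 0.5])
assert np.allclose(quat_to_matrix(q), quat_to_matrix(-q))   # same rotation
assert np.isclose(np.linalg.det(quat_to_matrix(q)), 1.0)    # proper rotation (handedness check)
```

Note the fix is negating the whole coefficient vector, not calling `inverse()`.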
Upvotes: 1
|
2018/03/17
| 601 | 1,955 |
<issue_start>username_0: I'm a little new to SSIS, coming from using Power Query in Excel a lot. Is it possible to create a conditional column in SSIS, similar to how you can in Power Query?
For example, in Power Query you can create a conditional column that says: IF column [FileName] contains USA, Canada, United States, or America, then "North America", else null. It would create a new column containing "North America", and anything that doesn't meet the criteria would be null. Is something like this possible in SSIS? I have tried using SUBSTRING and FINDSTRING, however they don't do exactly what I need.
Upvotes: 0 <issue_comment>username_2: Basically you're looking for a [ternary operator](https://learn.microsoft.com/en-us/sql/integration-services/expressions/conditional-ssis-expression) condition like this in `derived column`:
```
(FINDSTRING([Name], "USA", 1) > 0) || (FINDSTRING([Name], "Canada", 1) > 0) || (FINDSTRING([Name], "United States", 1) > 0) || (FINDSTRING([Name], "America", 1) > 0) ? "North America" : NULL(DT_WSTR, 13)
```
To explain it, I am indenting it as:
```
(FINDSTRING([Name], "USA", 1) > 0) ||
(FINDSTRING([Name], "Canada", 1) > 0) ||
(FINDSTRING([Name], "United States", 1) > 0) ||
(FINDSTRING([Name], "America", 1) > 0) ? "North America" :
NULL(DT_WSTR, 13)
```
To modify it further, you can use the following guide:
* For Contains `like '%value%'` : `FINDSTRING(col, "value", 1) > 0`
* Start with `like 'value%'` : `FINDSTRING(col, "value", 1) == 1`
* End with `like '%value'` : `REVERSE(LEFT(REVERSE(col), X)) == "value"`
source: [sqlservercentral blog](http://www.sqlservercentral.com/blogs/toddmcdermid/2012/05/22/basic-ssis-equivalents-to-t-sqls-like/)
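For cross-checking the expression logic outside SSIS, the same derived-column rule maps to ordinary string tests. A rough Python sketch (the function name and sample file names are illustrative only):

```python
def classify(file_name):
    # FINDSTRING(col, "value", 1) > 0 is a "contains" test, i.e. `in`;
    # the SSIS ternary `? :` is Python's conditional expression.
    keywords = ("USA", "Canada", "United States", "America")
    return "North America" if any(k in file_name for k in keywords) else None

print(classify("sales_USA_2018.csv"))  # North America
print(classify("sales_EU_2018.csv"))   # None
```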
Upvotes: 3 [selected_answer]
|
2018/03/17
| 2,035 | 7,355 |
<issue_start>username_0: Learning Symfony is a tough journey and I knew posts on SO would follow, so here's my problem : classic login page, I followed [this tutorial](https://symfony.com/doc/current/security/form_login_setup.html) and before that I followed the [registration form](http://symfony.com/doc/current/doctrine/registration_form.html) tutorial.
Registration works fine, user is inserted into the database with the correct informations, the problem is the login, because no matter what I've tried, the **Invalid Credentials** error message will pop-up, I tried with multiple users but nothing, I can't find the mistake, everything seems correct, I've checked many many times. So, time to show some code I guess :
My login form :
```
...
...
...
...
```
My controller that handles the login:
```
/**
* @Route("/login", name="login")
*
*/
public function login(Request $request, AuthenticationUtils $authenticationUtils)
{
// get the login error if there is one
$error = $authenticationUtils->getLastAuthenticationError();
// last username entered by the user
$lastUsername = $authenticationUtils->getLastUsername();
return $this->render('main/login.html.twig', array(
'last_username' => $lastUsername,
'error' => $error,
));
}
```
Basically a copy-paste from the tutorial. I've set the encoder in my security.yaml file:
```
security:
# https://symfony.com/doc/current/security.html#where-do-users-come-from-user-providers
providers:
the_user_provider:
entity:
class: App\Entity\User
property: name
encoders:
App\Entity\User:
algorithm: sha512
firewalls:
dev:
pattern: ^/(_(profiler|wdt)|css|images|js)/
security: false
main:
anonymous: ~
# activate different ways to authenticate
# http_basic: true
# https://symfony.com/doc/current/security.html#a-configuring-how-your-users-will-authenticate
form_login:
login_path: login
check_path: login
csrf_token_generator: security.csrf.token_manager
# https://symfony.com/doc/current/security/form_login_setup.html
provider: the_user_provider
# Easy way to control access for large sections of your site
# Note: Only the *first* access control that matches will be used
access_control:
- { path: ^/admin, roles: ROLE_USER }
# - { path: ^/profile, roles: ROLE_USER }
```
Also, since I don't know what the problem could be, here's the portion of the registration logic that encodes the user password:
```
public function register(Request $request, UserPasswordEncoderInterface $passwordEncoder, Sluggify $sluggify) {
...
...
// 3) Encode the password (you could also do this via Doctrine listener)
$password = $passwordEncoder->encodePassword($user, $user->getPlainPassword());
$user->setPassword($password);
```
And of course, inside my **User** entity I've already set a plainPassword and getPlainPassword:
```
...
...
/**
* @ORM\Column(type="string", length=64)
*/
private $password;
/**
* @Assert\NotBlank()
* @Assert\Length(max=4096)
*/
private $plainPassword;
...
...
public function getPlainPassword()
{
return $this->plainPassword;
}
```
For completeness, here are the last rows of my **dev.log** file:
```
[2018-03-17 16:35:06] request.INFO: Matched route "login". {"route":"login","route_parameters":{"_controller":"App\\Controller\\MainController::login","_route":"login"},"request_uri":"http://1192.168.3.11:8000/login","method":"POST"} []
[2018-03-17 16:35:06] doctrine.DEBUG: SELECT t0.id AS id_1, t0.name AS name_2, t0.email AS email_3, t0.slug AS slug_4, t0.avatar AS avatar_5, t0.password AS password_6, t0.is_author AS is_author_7, t0.is_active AS is_active_8 FROM user t0 WHERE t0.name = ? LIMIT 1 ["K3nzie"] []
[2018-03-17 16:35:06] security.INFO: Authentication request failed. {"exception":"[object] (Symfony\\Component\\Security\\Core\\Exception\\BadCredentialsException(code: 0): Bad credentials. at /home/k3nzie/projects/symfonyWebsite/vendor/symfony/security/Core/Authentication/Provider/UserAuthenticationProvider.php:88, Symfony\\Component\\Security\\Core\\Exception\\BadCredentialsException(code: 0): The presented password is invalid. at /home/k3nzie/projects/symfonyWebsite/vendor/symfony/security/Core/Authentication/Provider/DaoAuthenticationProvider.php:65)"} []
[2018-03-17 16:35:06] security.DEBUG: Authentication failure, redirect triggered. {"failure_path":"login"} []
[2018-03-17 16:35:06] request.INFO: Matched route "login". {"route":"login","route_parameters":{"_controller":"App\\Controller\\MainController::login","_route":"login"},"request_uri":"http://127.0.0.1:8000/login","method":"GET"} []
[2018-03-17 16:35:06] security.INFO: Populated the TokenStorage with an anonymous Token. [] []
[2018-03-17 16:35:07] request.INFO: Matched route "_wdt". {"route":"_wdt","route_parameters":{"_controller":"web_profiler.controller.profiler:toolbarAction","token":"62995d","_route":"_wdt"},"request_uri":"http://127.0.0.1:8000/_wdt/62995d","method":"GET"} []
```
Any suggestions? I've looked the whole day for a solution, and I'm sure it's a silly beginner error, but still...nothing.<issue_comment>username_1: I ran into the same problem. Try changing $user->getPlainPassword(). I believe that getPlainPassword is a form builder function and you have created your own form so you need to bind your variables with the request object.
In my code i did it like this.
```
$password = $passwordEncoder->encodePassword($user, $params['password']);
/**
* @Route("/register", name="user_registration")
*/
public function registerAction(Request $request, UserPasswordEncoderInterface $passwordEncoder)
{
$params = $request->request->all();
if ($request->isMethod("POST")) {
$user = new User();
$user->setFname($params["fname"]);
$user->setLname($params["lname"]);
$user->setEmail($params["email"]);
$user->setUsername($params["username"]);
$user->setCreatedAt(new \DateTime());
//$user->setPassword($params['password']);
$password = $passwordEncoder->encodePassword($user, $params['password']);
$user->setPassword($password);
$entityManager = $this->getDoctrine()->getManager();
$entityManager->persist($user);
$entityManager->flush();
return $this->redirectToRoute('login');
}
return $this->render(
'default/register.html.twig'
);
}
```
Upvotes: 0 <issue_comment>username_2: The problem is in this piece of code:
```
/**
* @ORM\Column(type="string", length=64)
*/
private $password;
```
SHA512 actually returns a string of length 128, so your field needs to be at least 128 characters.
You should set the length to 128 or more, or not set it at all, in which case it defaults to 255:
```
/**
* @ORM\Column(type="string")
*/
private $password;
```
After that change you should fix the field length in your database table manually or through migrations.
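The 128-character digest length is easy to confirm. A quick check with Python's standard `hashlib` (this inspects the raw hex digest only; Symfony's encoder configuration may add salting or encoding on top):

```python
import hashlib

digest = hashlib.sha512(b"any password").hexdigest()
print(len(digest))  # 128 -- will not fit in a 64-character column
```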
Upvotes: 2
|
2018/03/17
| 550 | 2,125 |
<issue_start>username_0: I wonder how I can get my page parts, such as `sidebar` or `header`, the Vue way.
Currently I get them like `@include('admin.parts.header')`, which is Blade templating, but is it possible to move them into components (`header.vue`) and include them the Vue way?
|
2018/03/17
| 300 | 952 |
<issue_start>username_0: ```
function repeatS(srr, num) {
if (num <= 0) {
return "";
}
var result = "";
for (var i = 0; i < num; i++) {
result = +srr;
}
return result;
}
console.log(repeatS("ahfidhfd", 3));
```
Here is my question: the result is NaN. Does anyone know what might be the problem here?<issue_comment>username_1: `result = +srr;`
should be
`result += srr;`
Upvotes: 2 <issue_comment>username_2: You are using a [unary plus `+`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Arithmetic_Operators#Unary_plus), which converts a string to a number; you need to append the string and assign the result to the left-hand variable.
```js
function repeatS(srr, num) {
if (num <= 0) {
return "";
}
var result = "";
for (var i = 0; i < num; i++) {
result += srr;
}
return result;
}
console.log(repeatS("ahfidhfd", 3));
```
Upvotes: 1
|
2018/03/17
| 509 | 1,627 |
<issue_start>username_0: I'm using `jquery-ui` to sort the items on my list. At first load there is no problem because it works fine, but after I go to another page and come back to the page where my list is, that's when I have trouble.
By the way, I'm using AJAX to load the content of my pages. And it seems that `jquery-ui` can't read the content that was loaded by AJAX. I need to refresh the page again so that it will work fine.
I created sample [**`here:`**](https://jsfiddle.net/junscrapers/3x4cpau4/)
My HTML Code:
```
Page 1
Page 2
* Item 1
* Item 2
* Item 3
* Item 4
```
My JS Code:
```
$( function() {
  $( "#sortable" ).sortable();
  $( "#sortable" ).disableSelection();
} );

function showPage1(){
  $('#desp').html(
    '<ul id="sortable">'+
    '<li>Item 1</li>'+
    '<li>Item 2</li>'+
    '<li>Item 3</li>'+
    '<li>Item 4</li>'+
    '</ul>'
  );
}

function showPage2(){
  $('#desp').html('<h3>Page 2 here...</h3>');
}
```
In my example, at first load there is no problem. But when you click Page 2 and go back to Page 1, the sort function will not work.
Upvotes: 1
|
2018/03/17
| 1,975 | 7,004 |
<issue_start>username_0: I just installed tensorflow-gpu and started to train my convolutional neural network. The problem is that my GPU usage percentage is constantly at 0%, and sometimes it increases to 20%. The CPU is somewhere at 20% and the disk above 60%. I tested whether I installed it correctly by doing some matrix multiplications; in that case everything was all right and the GPU usage was above 90%.
```
with tf.device("/gpu:0"):
#here I set up the computational graph
```
when I run the graph I use this, so the compiler will decide if one operation has a gpu implementation or not
```
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
```
I have an NVIDIA GeForce GTX 950M graphics card and I don't get errors at runtime. What am I doing wrong?
later edit, my computation graph
```
with tf.device("/gpu:0"):
X = tf.placeholder(tf.float32, shape=[None, height, width, channels], name="X")
dropout_rate= 0.3
training = tf.placeholder_with_default(False, shape=(), name="training")
X_drop = tf.layers.dropout(X, dropout_rate, training = training)
y = tf.placeholder(tf.int32, shape = [None], name="y")
conv1 = tf.layers.conv2d(X_drop, filters=32, kernel_size=3,
strides=1, padding="SAME",
activation=tf.nn.relu, name="conv1")
conv2 = tf.layers.conv2d(conv1, filters=64, kernel_size=3,
strides=2, padding="SAME",
activation=tf.nn.relu, name="conv2")
pool3 = tf.nn.max_pool(conv2,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding="VALID")
conv4 = tf.layers.conv2d(pool3, filters=128, kernel_size=4,
strides=3, padding="SAME",
activation=tf.nn.relu, name="conv4")
pool5 = tf.nn.max_pool(conv4,
ksize=[1, 2, 2, 1],
strides=[1, 1, 1, 1],
padding="VALID")
pool5_flat = tf.reshape(pool5, shape = [-1, 128*2*2])
fullyconn1 = tf.layers.dense(pool5_flat, 128, activation=tf.nn.relu, name = "fc1")
fullyconn2 = tf.layers.dense(fullyconn1, 64, activation=tf.nn.relu, name = "fc2")
logits = tf.layers.dense(fullyconn2, 2, name="output")
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer()
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
hm_epochs = 100
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True
```
the batch size is 128
```
with tf.Session(config=config) as sess:
tbWriter = tf.summary.FileWriter(logPath, sess.graph)
dataset = tf.data.Dataset.from_tensor_slices((training_images, training_labels))
dataset = dataset.map(rd.decodeAndResize)
dataset = dataset.batch(batch_size)
testset = tf.data.Dataset.from_tensor_slices((test_images, test_labels))
testset = testset.map(rd.decodeAndResize)
testset = testset.batch(len(test_images))
iterator = dataset.make_initializable_iterator()
test_iterator = testset.make_initializable_iterator()
next_element = iterator.get_next()
sess.run(tf.global_variables_initializer())
for epoch in range(hm_epochs):
epoch_loss = 0
sess.run(iterator.initializer)
while True:
try:
epoch_x, epoch_y = sess.run(next_element)
# _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
# epoch_loss += c
sess.run(training_op, feed_dict={X:epoch_x, y:epoch_y, training:True})
except tf.errors.OutOfRangeError:
break
sess.run(test_iterator.initializer)
# acc_train = accuracy.eval(feed_dict={X:epoch_x, y:epoch_y})
try:
next_test = test_iterator.get_next()
test_images, test_labels = sess.run(next_test)
acc_test = accuracy.eval(feed_dict={X:test_images, y:test_labels})
print("Epoch {0}: Train accuracy {1}".format(epoch, acc_test))
except tf.errors.OutOfRangeError:
break
# print("Epoch {0}: Train accuracy {1}, Test accuracy: {2}".format(epoch, acc_train, acc_test))
save_path = saver.save(sess, "./my_first_model")
```
I have 9k training pictures and 3k pictures for testing<issue_comment>username_1: There are a few issues in your code that may result in low GPU usage.
1) Add a [`prefetch`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#prefetch) instruction at the end of your `Dataset` pipeline to enable the CPU to maintain a buffer of input data batches ready to move them to the GPU.
```
# this should be the last thing in your pipeline
dataset = dataset.prefetch(1)
```
2) You are using `feed_dict` to feed your model, along with `Dataset` iterators. This is not the intended way! [`feed_dict` is the slowest method of inputting data to your model and not recommended](https://www.tensorflow.org/api_guides/python/reading_data#Feeding). You should define your *model* in terms of the `next_element` outputs of the iterators.
Example:
```
next_x, next_y = iterator.get_next()
with tf.device('/GPU:0'):
conv1 = tf.layers.conv2d(next_x, filters=32, kernel_size=3,
strides=1, padding="SAME",
activation=tf.nn.relu, name="conv1")
# rest of model here...
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
labels=next_y)
```
Then you can call your training operation *without* using `feed_dict`, and the iterator will handle feeding data to your model behind the scenes. [Here is another related Q&A](https://stackoverflow.com/questions/49053569/tensorflow-create-minibatch-from-numpy-array-2-gb/49054290#49054290). Your new training loop would look something like this:
```
while True:
try:
sess.run(training_op, feed_dict={training:True})
except tf.errors.OutOfRangeError:
break
```
You should only input data via `feed_dict` that your iterator does not provide, and these should typically be very lightweight.
For further tips on performance, you can refer to [this guide on TF website](https://www.tensorflow.org/versions/master/performance/datasets_performance).
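The point of `prefetch` is overlap: the CPU prepares the next batch while the accelerator consumes the current one. The mechanism can be sketched in plain Python with a background producer thread (an illustration of the concept only, not TensorFlow API):

```python
import queue
import threading

def prefetch(iterable, buffer_size=1):
    # Produce items in a background thread, keeping up to
    # `buffer_size` items ready ahead of the consumer.
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()

    def producer():
        for item in iterable:
            q.put(item)
        q.put(sentinel)  # signal end of the stream

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            return
        yield item

print(list(prefetch(range(5))))  # [0, 1, 2, 3, 4]
```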
Upvotes: 2 [selected_answer]<issue_comment>username_2: You could try the following code to see if tensorflow is recognizing your GPU:
```
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
```
Upvotes: 0
|
2018/03/17
| 973 | 2,962 |
<issue_start>username_0: I have a json response like this,
```
{
  "meteo_inverter_analysis": {
    "Id93922.1": { "inverter_id": "Id93922.1", "total_electricity_generated": 1567.7910000000002 },
    "Id93922.2": { "inverter_id": "Id93922.2", "total_electricity_generated": 1468.4869999999999 },
    "Id93922.3": { "inverter_id": "Id93922.3", "total_electricity_generated": 498.7319999999999 },
    "Id93922.4": { "inverter_id": "Id93922.4", "total_electricity_generated": 461.8369999999999 }
  }
}
```
From that response I want to create a JS array from `total_electricity_generated`. The array would be:
var array = [1567.7910000000002, 1468.4869999999999, 498.7319999999999, 461.8369999999999]
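For illustration, here is the transformation being asked for, sketched in Python (in JavaScript the equivalent would be `Object.values(...)` plus a `map` over `total_electricity_generated`):

```python
import json

response = '''{"meteo_inverter_analysis": {
  "Id93922.1": {"inverter_id": "Id93922.1", "total_electricity_generated": 1567.7910000000002},
  "Id93922.2": {"inverter_id": "Id93922.2", "total_electricity_generated": 1468.4869999999999},
  "Id93922.3": {"inverter_id": "Id93922.3", "total_electricity_generated": 498.7319999999999},
  "Id93922.4": {"inverter_id": "Id93922.4", "total_electricity_generated": 461.8369999999999}}}'''

# Take the values of the inner object, in order, and pull out one field each.
inverters = json.loads(response)["meteo_inverter_analysis"]
values = [v["total_electricity_generated"] for v in inverters.values()]
print(values)  # [1567.7910000000002, 1468.4869999999999, 498.7319999999999, 461.8369999999999]
```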
Can anyone help me with how to do it? TIA
|
2018/03/17
| 1,894 | 6,552 |
<issue_start>username_0: New rails 5.1 application, installed bootstrap using yarn
```
yarn add bootstrap
```
Now in my application.scss file I have this, which currently includes the entire bootstrap css file:
```
/*
*= require_tree .
*= require_self
*= require bootstrap/dist/css/bootstrap
*/
```
My layout file looks like this:
```
<%= stylesheet_link_tag 'application', media: 'all', 'data-turbolinks-track': 'reload' %>
<%= javascript_include_tag 'application', 'data-turbolinks-track': 'reload' %>
```
Everything works fine when running
>
> rails s
>
>
>
Now from what I understand to use webpack I have to change the includes to look like this:
```
<%= stylesheet_pack_tag 'application' %>
<%= javascript_pack_tag 'application' %>
```
Now when I run rails server I get an error:
```
Webpacker can't find application.css in /dev/myapp/public/packs/manifest.json. Possible causes:
1. You want to set webpacker.yml value of compile to true for your environment
unless you are using the `webpack -w` or the webpack-dev-server.
2. webpack has not yet re-run to reflect updates.
3. You have misconfigured Webpacker's config/webpacker.yml file.
4. Your webpack configuration is not creating a manifest.
Your manifest contains:
{
"application.js": "/packs/application-8d71e5035f8940a9e3d3.js",
"application.js.map": "/packs/application-8d71e5035f8940a9e3d3.js.map"
}
```
In another terminal I ran this, but it did't help:
```
./bin/webpack-dev-server
```
Am I supposed to run webpack-dev-server, or is `rails s` enough?
How can I test that both my CSS and JavaScript setups are using webpack? Can I start writing ES5/6 now with this setup?
Reference:
webpacker.yml
```
# Note: You must restart bin/webpack-dev-server for changes to take effect
default: &default
source_path: app/javascript
source_entry_path: packs
public_output_path: packs
cache_path: tmp/cache/webpacker
# Additional paths webpack should lookup modules
# ['app/assets', 'engine/foo/app/assets']
resolved_paths: []
# Reload manifest.json on all requests so we reload latest compiled packs
cache_manifest: false
extensions:
- .js
- .sass
- .scss
- .css
- .module.sass
- .module.scss
- .module.css
- .png
- .svg
- .gif
- .jpeg
- .jpg
development:
<<: *default
compile: true
# Reference: https://webpack.js.org/configuration/dev-server/
dev_server:
https: false
host: localhost
port: 3035
public: localhost:3035
hmr: false
# Inline should be set to true if using HMR
inline: true
overlay: true
compress: true
disable_host_check: true
use_local_ip: false
quiet: false
headers:
'Access-Control-Allow-Origin': '*'
watch_options:
ignored: /node_modules/
test:
<<: *default
compile: true
# Compile test packs to a separate directory
public_output_path: packs-test
production:
<<: *default
# Production depends on precompilation of packs prior to booting for performance.
compile: false
# Cache manifest.json for performance
cache_manifest: true
```<issue_comment>username_1: You need to require css files located in `node_modules` from `app/javascripts/application.js` (or in any js files that get used by `app/javascripts/application.js`) rather than from `app/assets/stylesheets/application.scss` ([see here](https://github.com/rails/webpacker/blob/master/docs/css.md), or [see here](https://github.com/rails/webpacker/issues/257#issuecomment-293880589)).
[potentially inaccurate statement but that's how I can explain it] Because you don't have any css required in any files under `app/javascripts`, webpacker does not compile one `/packs/application.css` for you, so `stylesheet_pack_tag` breaks. It will once you do require some as explained
You could stick with your `app/assets/stylesheets/application.scss` (although idk how to require node_modules css from there), but then you should not use `stylesheet_pack_tag` in your layout, rather the og `stylesheet_link_tag`
Knowing that, one javascript pack with a css import will results in a css pack :
```
app/
javascript/
css/
application.scss
welcome.scss
packs/
application.js
welcome.js
// in app/javascript/packs/application.js
import 'css/application.scss'
// in app/javascript/packs/welcome.js
import 'css/welcome.scss'
```
Upvotes: 1 <issue_comment>username_2: I did [configure an app](https://github.com/fabriziobertoglio1987/surfcheck) with `bootstrap 4`, `yarn`, and `rails 5.1`.
From my understanding, your problem is not connected to `yarn` and `bootstrap` (also because I believe you need to require more files if you want to configure `bootstrap` with `yarn`); rather, I believe you have a problem with `Webpacker`.
```
Webpacker can't find application.css in /dev/myapp/public/packs/manifest.json
```
The first thing I notice is that you do not have an `application.css` but an **scss** file. My second step was to search for that error and find this [solution](https://stackoverflow.com/questions/46004615/stylesheet-pack-tag-not-finding-app-javascript-src-application-css-in-rails-5-1) (because I don't know `Webpacker`).
It says that you need to include in your `application.js` the following line
```
import "path to application.css"
```
but the question was not accepted so I would rely on the [`rails/webpacker` instructions](https://github.com/rails/webpacker#usage)
you should create an `application.css` at `app/javascript/src/application.css`
```
app/javascript:
├── packs:
│ # only webpack entry files here
│ └── application.js
└── src:
│ └── application.css
└── images:
└── logo.svg
```
then keep following the instructions from the above link
This is how I am requiring `bootstrap 4` with `yarn` in my [`application.scss`](https://github.com/fabriziobertoglio1987/surfcheck/blob/master/app/assets/stylesheets/application.scss)
```
@import 'bootstrap/scss/bootstrap';
@import 'main';
```
while my [`application.js`](https://github.com/fabriziobertoglio1987/surfcheck/blob/master/app/assets/javascripts/application.js) is using a custom `popper.js` file that you can copy (if you need)
```
//= require jquery/dist/jquery
//= require rails-ujs
//= require popper-custom
//= require bootstrap/dist/js/bootstrap
//= require main
//= require turbolinks
```
I would be happy to provide more help
Bootstrap 4 works with my project, so you can find some useful information there on how to configure yarn + Bootstrap 4
Upvotes: 0
|
2018/03/17
| 1,116 | 3,438 |
<issue_start>username_0: I have two tables, `Product` and `SalesProduct`.
`Product` table sample records (here `ProductID` is the primary key):
```
ProductID | ProductCode | Name
----------+-------------+-----------
1 P001 Computer
2 p002 Laptop
3 p003 Speaker
```
`SalesProduct` table sample records (here `ID` is the primary key, and `ProductID` is a foreign key referencing the `Products` table):
```
ID| SalesNo | ProductID
--+---------+-----------
1 S0001 1
2 S0002 2
3 S0003 3
4 S0004 1
5 S0005 2
6 S0006 3
7 S0007 1
8 S0008 2
9 S0009 3
```
When I write this query:
```
SELECT
SalesNo, SalesProduct.ProductID, Name
FROM
SalesProduct
JOIN
Product ON SalesProduct.ProductID = Product.ProductID
```
it will work fine and return this result:
```
SalesNo | ProductID | Name
--------+-----------+---------
S0001 1 Computer
S0002 2 Laptop
S0003 3 Speaker
S0004 1 Computer
S0005 2 Laptop
S0006 3 Speaker
S0007 1 Computer
S0008 2 Laptop
S0009 3 Speaker
```
BUT when I try to select the `TOP(3)` rows like this:
```
SELECT TOP(3)
SalesNo, SalesProduct.ProductID, Name
FROM
SalesProduct
JOIN
Product ON SalesProduct.ProductID = Product.ProductID
```
This query will return same product:
```
S0001 1 Computer
S0004 1 Computer
S0007 1 Computer
```
BUT I want this result:
```
S0001 1 Computer
S0002 2 Laptop
S0003 3 Speaker
```
What's wrong in the above query? And how can I get the expected output?<issue_comment>username_1: ```
SELECT TOP(3) sp.SalesNo, sp.ProductID, p.Name
FROM SalesProduct sp
INNER JOIN Product p ON sp.ProductID = p.ProductID
ORDER BY sp.SalesNo
```
Upvotes: 0 <issue_comment>username_2: Please try this
```
DECLARE @Product TABLE (ProductId INT,ProductCode VARCHAR(10), Name VARCHAR(100))
INSERT INTO @Product(ProductId,ProductCode,Name)
(SELECT 1,'P001','Computer' UNION ALL SELECT 2,'P002','LAPTOP' UNION ALL SELECT 3,'P003','Speaker')
DECLARE @SalesProduct TABLE (Id INT,SalesNo VARCHAR(10),ProductId INT)
INSERT INTO @SalesProduct
SELECT 1,'S0001',1 UNION ALL SELECT 2,'S0002',2 UNION ALL SELECT 3,'S0003',3 UNION ALL SELECT 4,'S0004',1
UNION ALL SELECT 5,'S0005',2 UNION ALL SELECT 6,'S0006',3 UNION ALL SELECT 7,'S0007',1 UNION ALL SELECT 8,'S0008',2
UNION ALL SELECT 9,'S0009',3
SELECT TOP(3)
SP.SalesNo, SP.ProductID, P.Name
FROM
@SalesProduct SP
INNER JOIN
@Product P ON SP.ProductID = P.ProductID
ORDER BY SP.SalesNo
```
Upvotes: 0 <issue_comment>username_3: You are not getting the expected output because you don't have an ORDER BY clause specified. You can do it in either of the following ways:
```
SELECT TOP 3
SalesNo, SalesProduct.ProductID, Name
FROM
SalesProduct
JOIN
Product ON SalesProduct.ProductID = Product.ProductID
ORDER BY SalesNo ASC
```
Or Like This
```
;WITH CTE
AS
(
SELECT
RN = ROW_NUMBER() OVER(ORDER BY SalesNo),
SalesNo, SalesProduct.ProductID, Name
FROM
SalesProduct
JOIN
Product ON SalesProduct.ProductID = Product.ProductID
)
SELECT
    SalesNo, ProductID, Name
FROM CTE
WHERE RN < 4
```
Upvotes: 1
|
2018/03/17
| 2,640 | 9,512 |
<issue_start>username_0: EDIT: I removed the `GROUP BY` clause from the example queries but the same problem shows "When I join table x to an empty/1 row table y MySQL makes a full table scan on table x in spite of I'm using limit"
---
Original Question:
I was trying to learn how to optimize my SQL queries and I encountered a behavior that I can't understand. having a schema like this
[SQL fiddle](http://sqlfiddle.com/#!9/bbc4a/2)
```
CREATE TABLE `country` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB ;
CREATE TABLE `school` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB ;
CREATE TABLE `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(45) DEFAULT NULL,
`country_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `fk_country_idx` (`country_id`),
CONSTRAINT `fk_users_country` FOREIGN KEY (`country_id`) REFERENCES `country` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB ;
CREATE TABLE `user_school_mm` (
`user_id` int(11) NOT NULL,
`school_id` int(11) NOT NULL,
PRIMARY KEY (`user_id`, `school_id`),
KEY `fk_user_school_mm_user_idx` (`user_id`),
KEY `fk_user_school_mm_school_idx` (`school_id`),
CONSTRAINT `fk_user_school_mm_user` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION,
CONSTRAINT `fk_user_school_mm_school` FOREIGN KEY (`school_id`) REFERENCES `school` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION
) ENGINE=InnoDB ;
INSERT INTO country (name) VALUES ('fooCountry1');
INSERT INTO school (name) VALUES ('fooSchool1'),('fooSchool2'),('fooSchool3');
INSERT INTO users (name, country_id) VALUES
('fooUser1',1),
('fooUser2',1),
('fooUser3',1),
('fooUser4',1),
('fooUser5',1),
('fooUser6',1),
('fooUser7',1),
('fooUser8',1),
('fooUser9',1),
('fooUser10',1)
;
INSERT INTO user_school_mm (user_id, school_id) VALUES
(1,1),(1,2),(1,3),
(2,1),(2,2),(2,3),
(3,1),(3,2),(3,3),
(4,1),(4,2),(4,3),
(5,1),(5,2),(5,3),
(6,1),(6,2),(6,3),
(7,1),(7,2),(7,3),
(8,1),(8,2),(8,3),
(9,1),(9,2),(9,3),
(10,1),(10,2),(10,3)
;
```
**QUERY 1 (Fast)**
```
-- GOOD QUERY (MySQL uses the limit and skip users table scan after 2 rows )
SELECT *
FROM
users LEFT JOIN
user_school_mm on users.id = user_school_mm.user_id
ORDER BY users.id ASC
LIMIT 2
-- takes about 100 milliseconds if users table is 3 million records
```
Explain
```
+---+-----------+---------------+------+-----------------------------------+----------+---------+---------------+------+-----------+
|id |select_type|table | type | possible_keys | key | key_len | ref | rows | Extra |
+---+-----------+---------------+------+-----------------------------------+----------+---------+---------------+------+-----------+
|1 |SIMPLE |users |index |PRIMARY,fk_country_idx | PRIMARY |4 | |2 | |
|1 |SIMPLE |user_school_mm |ref |PRIMARY,fk_user_school_mm_user_idx | PRIMARY |4 |tests.users.id |1 |Using index|
+---+-----------+---------------+------+-----------------------------------+----------+---------+---------------+------+-----------+
```
**QUERY 2 (Slow)**
```
-- BAD QUERY (MySQL ignores the limit and scanned the entire users table )
SELECT *
FROM
users LEFT JOIN
country on users.country_id = country.id
ORDER BY users.id ASC
LIMIT 2
-- takes about 9 seconds if users table is 3 million records
```
Explain
```
+---+-----------+--------+------+------------------------+-----+---------+-----+------+---------------------------------------------------+
|id |select_type|table | type | possible_keys | key | key_len | ref | rows | Extra |
+---+-----------+--------+------+------------------------+-----+---------+-----+------+---------------------------------------------------+
|1 |SIMPLE |users |ALL | PRIMARY,fk_country_idx | | | | 10 | Using temporary; Using filesort |
|1 |SIMPLE |country |ALL | PRIMARY | | | | 1 | Using where; Using join buffer (Block Nested Loop)|
+---+-----------+--------+------+------------------------+-----+---------+-----+------+---------------------------------------------------+
```
I don't understand what is going on behind the scenes. I thought that if I used the primary key of the users table for the ordering and grouping, MySQL would take the first 2 rows of the users table and continue the joining, but it seems it didn't do that and scanned the entire table in query 2.
**Why does MySQL scan the entire table in query 2, while it takes only the first 2 rows in query 1?**
*MySQL version is 5.6.38*<issue_comment>username_1: After some tests, it turns out that if the second table (`user_school_mm`) has some data, MySQL will not do a full table scan on the first table, and if the second table (`country`) has no data or very little data (1 or 2 records), MySQL will do a full table scan. Why this happens, I don't know.
How to reproduce
1- Create a schema like this
```
CREATE TABLE `event` (
`ev_id` int(11) NOT NULL AUTO_INCREMENT,
`ev_note` varchar(255) DEFAULT NULL,
PRIMARY KEY (`ev_id`)
) ENGINE=InnoDB;
CREATE TABLE `table1` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB ;
CREATE TABLE `table2` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB ;
```
2- insert in the main table (`event` in this case) some data (I filled it with 35601000 rows)
3- leave table1 empty and insert 15 rows in table2
```
insert into table2 (name) values
('fooBar'),('fooBar'),('fooBar'),('fooBar'),('fooBar'),
('fooBar'),('fooBar'),('fooBar'),('fooBar'),('fooBar'),
('fooBar'),('fooBar'),('fooBar'),('fooBar'),('fooBar');
```
4- now join the main table with table2 and retest the same query with table1
**Query 1 (Fast)**
```
select *
from
event left join
table2 on event.ev_id = table2.id
order by event.ev_id
limit 2;
-- executed in 300 milliseconds measured by the client
```
Explain
```
+---+-----------+--------+------+----------------+--------+---------+------------------+------+--------+
|id |select_type|table | type | possible_keys | key | key_len | ref | rows | Extra |
+---+-----------+--------+------+----------------+--------+---------+------------------+------+--------+
|1 |SIMPLE |event |index | |PRIMARY |4 | | 2 | |
|1 |SIMPLE |table2 |eq_ref|PRIMARY |PRIMARY |4 |tests.event.ev_id | 1 | |
+---+-----------+--------+------+----------------+--------+---------+------------------+------+--------+
```
**Query 2 (Slow)**
```
select *
from
event left join
table1 on event.ev_id = table1.id
order by event.ev_id
limit 2;
-- executed in 79 seconds measured by the client
```
Explain
```
+---+-----------+--------+------+----------------+--------+---------+-------+---------+---------------------------------------------------+
|id |select_type|table | type | possible_keys | key | key_len | ref | rows | Extra |
+---+-----------+--------+------+----------------+--------+---------+-------+---------+---------------------------------------------------+
|1 |SIMPLE |event |ALL | | | | |33506704 | Using temporary; Using filesort |
|1 |SIMPLE |table1 |ALL |PRIMARY | | | |1 | Using where; Using join buffer (Block Nested Loop)|
+---+-----------+--------+------+----------------+--------+---------+-------+---------+---------------------------------------------------+
```
*MySQL version is 5.6.38*
Upvotes: 0 <issue_comment>username_2: The main reason is the misuse of `GROUP BY`. Let's take the first query. Even though it is "fast", it is still "wrong":
```
SELECT *
FROM users
LEFT JOIN user_school_mm on users.id = user_school_mm.user_id
GROUP BY users.id
ORDER BY users.id ASC
LIMIT 2
```
A user can go to two schools. The use of the many:many mapping `user_school_mm` claims that is a possibility. So, after doing the `JOIN`, you get 2 rows for a single user. But then, you `GROUP BY users.id` to boil it down to a single row. But... which of the two `school_id` values should you use?
I am not going to try to address the performance issues until you present queries that make sense. At that point it will be easier to point out why one query performs better than another.
Upvotes: 2 <issue_comment>username_3: The MySQL optimizer will decide on join order/method first, and then check whether, for the chosen join order, it is possible to avoid sorting by using an index. For the slow query in this question, the optimizer has decided to use Block-Nested-Loop (BNL) join.
BNL is usually quicker than using an index when one of the tables is very small (and there is no LIMIT).
However, with BNL, rows will not necessarily come in the order given by the first table. Hence, the result of the join needs to be sorted before applying the LIMIT.
You can turn off BNL by `set optimizer_switch = 'block_nested_loop=off';`
Upvotes: 3 [selected_answer]
|
2018/03/17
| 418 | 1,390 |
<issue_start>username_0: For example, I have:
```
a = ["a","b","c","d"]
```
I want to create a function (a key) such that:
```
def myfunc(a_list_of_items):
# I have no idea what to do after this
a.sort(key=myfunc)
print(a)
```
Output should be:
```
["a","c","b","d"] #elements that had odd index -
# - stacked at one side and those with even stacked on the other
```<issue_comment>username_1: I might use
```
[x for (_, x) in sorted(enumerate(a), key=lambda i: i[0] % 2)]
```
First, `enumerate` creates a list of index/value pairs from the original list. The given `key` function extracts the index and finds the remainder *modulo* 2; even indices are 0 and odd indices are 1, so that is sufficient to get the order you want. The list comprehension then extracts the original values from the sorted list of index/value pairs.
(For the given problem, concatenating two slices is certainly simpler, although possibly more costly. Benchmarking would determine which of the two approaches is faster/more efficient.)
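A quick, runnable check of this approach (relying on the fact that Python's `sorted` is stable, so relative order within the even-index and odd-index groups is preserved):

```python
a = ["a", "b", "c", "d"]

# sorted() is stable: items with equal keys (same index parity)
# keep their original relative order
result = [x for _, x in sorted(enumerate(a), key=lambda pair: pair[0] % 2)]
print(result)  # ['a', 'c', 'b', 'd']
```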
Upvotes: 0 <issue_comment>username_2: How about the following (uses [Python's slice notation](https://stackoverflow.com/questions/509211/understanding-pythons-slice-notation)):
```
a[::2] + a[1::2]
```
This will result in:
```
['a', 'c', 'b', 'd']
```
Here, the `sorted()` function is not very appropriate in my opinion.
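If you want it wrapped as a function, as the question sketches, a small helper along these lines works (the name `evens_then_odds` is my own, not from the question):

```python
def evens_then_odds(seq):
    """Return items at even indices followed by items at odd indices."""
    return seq[::2] + seq[1::2]

a = ["a", "b", "c", "d"]
print(evens_then_odds(a))  # ['a', 'c', 'b', 'd']
```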
Upvotes: 3 [selected_answer]
|
2018/03/17
| 1,062 | 3,822 |
<issue_start>username_0: A demo of what I'm *expecting* to work is here: <https://jsfiddle.net/qe766xn0/4/>
The child component is firing an event that the parent component is not picking up on. According to [this documentation](https://v2.vuejs.org/v2/guide/components.html#Composing-Components) on the event relationship between a child and parent I would expect this to work. **What am I overlooking that's causing the lack of logs in my console?**
```
Vue.component('child-component', {
template: 'Child',
mounted() {
this.$emit('test');
},
methods: {
onClick: function() {
this.$emit('child-click-on');
}
}
});
var vm = new Vue({
el: '#app',
created: function () {
this.$on('test', function () {
console.log('test-on');
});
this.$on('child-click', function () {
console.log('child-click-on');
});
}
});
```<issue_comment>username_1: First, this code
```
```
means that the *root* Vue is listening for the child component to emit a `click` event, and that doesn't appear to be your intention. It seems like you really want to attach a click handler to the root div of your component, which has to be done in the component.
```
Vue.component('child-component', {
template: `
Child
`,
methods: {
onClick: function() {
this.$emit('child-click-on');
}
}
});
```
At that point, your component will be properly emitting the `child-click-on` event.
Second, you set up the listener like this:
```
this.$on('child-click', function () {
console.log('child-click-on');
});
```
Let's quickly move past the fact that it's listening to the wrong event (it should be listening to `child-click-on` because that is what is emitted) to the fact that by saying `this.$on` you are listening for the *parent* or root Vue to emit the event, and that never happens, because the event is emitted by the component. Therefore you need to listen to the event on the component. Idiomatically, this is done by adding the handler in the template:
```
```
There the root Vue is listening for the `child-click-on` event, at which point it will call the `onChildClick` method (which is just a method I defined for handing the event).
If, however, you were forced for some reason to set up the handler using `$on`, you would need to use a ref to get a reference to the child component so that you could set up the event listener.
```
```
And since refs are not available until the component is rendered, you would have to add the listener in the `mounted` event.
```
mounted: function () {
this.$refs.child.$on('child-click-on', function () {
console.log('child-click-on');
});
}
```
So, here is an example of the problem solved idiomatically.
```js
console.clear()
Vue.component('child-component', {
template: `
Child
`,
methods: {
onClick: function() {
this.$emit('child-click-on');
}
}
});
var vm = new Vue({
el: '#app',
methods: {
onChildClick() {
console.log("child-click-on")
}
}
});
```
```html
```
And here it is solved using a ref.
```js
Vue.component('child-component', {
template: `
Child
`,
methods: {
onClick: function() {
this.$emit('child-click-on');
}
}
});
var vm = new Vue({
el: '#app',
mounted: function () {
this.$refs.child.$on('child-click-on', function () {
console.log('child-click-on');
});
}
});
```
```html
```
Upvotes: 2 <issue_comment>username_2: @username_1 posted a great answer; however, at the end of the day it still requires a lot of complexity. The solution I ended up finding is to fire all events in the root event namespace. You can do this by using `this.$root.$off()` and `this.$root.$on()`.
Upvotes: 1 [selected_answer]
|
2018/03/17
| 1,474 | 4,631 |
<issue_start>username_0: The following minimal CRUD example using Spring Boot, Hibernate, JpaRepository, CockroachDB and Kotlin produces `org.springframework.orm.jpa.JpaSystemException` / `org.hibernate.TransactionException`.
The Entity `Thing` in question has just two fields:
```
@Entity
data class Thing (
@Id
var id: Long,
var value: String
)
```
To keep this post short, I stored the actual souce files in gists:
[`./src/main/kotlin/ThingService.kt`](https://gist.github.com/Dobiasd/5a3c31b89f84aee8b7ee77af6ffbf3e8)
[`./src/main/resources/application.properties`](https://gist.github.com/Dobiasd/ccb0767bf1cc8f6cebe68fab56831dc2)
[`./build.gradle.kts`](https://gist.github.com/Dobiasd/34b45de7288a4ad09275bcf5e5be3517)
[`./stress_test.py`](https://gist.github.com/Dobiasd/187354a457bd8f742869b0294411ae55)
With these files, the issue can be reproduced (on Ubuntu 16.04 in my case) using the following commands.
Download and initialize CockroachDB:
```
# download
wget -qO- https://binaries.cockroachdb.com/cockroach-v1.1.5.linux-amd64.tgz | tar xvz
# start
./cockroach-v1.1.5.linux-amd64/cockroach start --insecure
# leave terminal open in background
# init
cockroach sql --insecure -e "CREATE USER root WITH PASSWORD '123';"
cockroach sql --insecure -e "CREATE DATABASE things_db;"
cockroach sql --insecure -e "GRANT ALL ON DATABASE things_db TO root;"
```
Run data service:
```
gradle bootRun
# leave terminal open in background
```
Run stress test:
```
python3 stress_test.py
```
`stress_test.py` concurrently sends `PUT` requests and `GET` requests (find thing by value) to the service. Most requests work fine, but in between the output looks as follows:
```
PUT OK
find OK
PUT OK
find OK
find OK
find OK
PUT ERROR: {"timestamp":"2018-03-17T16:00:24.616+0000","status":500,"error":"Internal Server Error","message":"Unable to commit against JDBC Connection; nested exception is org.hibernate.TransactionException: Unable to commit against JDBC Connection","path":"/thing/"}
find OK
PUT OK
```
[Logs of the Spring application](https://gist.github.com/Dobiasd/117d853ba2d920dd007a64076fa2f7bd) show more details:
```
2018-03-17 17:00:24.615 ERROR 3547 --- [nio-8082-exec-6] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.orm.jpa.JpaSystemException: Unable to commit against JDBC Connection; nested exception is org.hibernate.TransactionException: Unable to commit against JDBC Connection] with root cause
org.postgresql.util.PSQLException: ERROR: restart transaction: HandledRetryableTxnError: TransactionRetryError: retry txn (RETRY_SERIALIZABLE): "sql txn" id=1cb57665 key=/Table/51/1/11125601/0 rw=true pri=0.04354217 iso=SERIALIZABLE stat=PENDING epo=0 ts=1521302424.604752770,1 orig=1521302424.604725980,0 max=1521302424.604725980,0 wto=false rop=false seq=3
```
There are no concurrent writes going on. All writes are strictly sequential. The problem only occurs when the concurrent reads come into play. However I think this should not cause the need to retry any transactions.
Is there something wrong with the configuration of my database connection, or what could be the issue?<issue_comment>username_1: The HandledRetryableTxnError indicates that the [transaction should be retried](https://www.cockroachlabs.com/docs/stable/transactions.html#transaction-retries). This occurs at `SERIALIZABLE` isolation levels when conflicts are detected between transactions.
Cockroach performs some retries automatically but not all, requiring client involvement.
You can find client-side retry examples in multiple languages on the [docs site](https://www.cockroachlabs.com/docs/stable/build-an-app-with-cockroachdb.html), including [java](https://www.cockroachlabs.com/docs/stable/build-a-java-app-with-cockroachdb.html#transaction-with-retry-logic).
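The general shape of such client-side retry logic is language-agnostic. Below is a minimal Python sketch; note that the `RetryableTxnError` class and the backoff values are placeholders for illustration, not actual CockroachDB driver APIs (a real driver surfaces this as a serialization failure, e.g. SQLSTATE 40001):

```python
import time

class RetryableTxnError(Exception):
    """Placeholder for a driver's serialization-failure error (e.g. SQLSTATE 40001)."""

def run_transaction(op, max_retries=5, base_delay=0.05):
    # Run the transactional operation, retrying with exponential backoff
    # on retryable errors; re-raise after the last attempt.
    for attempt in range(max_retries):
        try:
            return op()
        except RetryableTxnError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Example: an operation that fails twice before succeeding
calls = {"n": 0}
def flaky_op():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RetryableTxnError()
    return "committed"

print(run_transaction(flaky_op))  # committed
```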
Upvotes: 3 [selected_answer]<issue_comment>username_2: Solved it by adding `org.springframework.retry` as a dependency (`org.springframework.retry:spring-retry:1.2.2.RELEASE`), importing the needed annotations
```
import org.springframework.retry.annotation.EnableRetry
import org.springframework.retry.annotation.Retryable
```
and replacing
```
@SpringBootApplication
```
with
```
@SpringBootApplication
@EnableRetry
```
and
```
@PutMapping("/thing/")
```
with
```
@PutMapping("/thing/")
@Retryable
```
I don't understand why this is needed at all when only doing strictly sequential writes (with concurrent reads), but at least it seems to work.
Upvotes: 1
|
2018/03/17
| 1,552 | 6,250 |
<issue_start>username_0: I have a Parent Component with an HTML click event that passes the clicked element to a method on the component.ts file, I would like this click event to be routed to Services, made into a new `Subject`, then using the `next()` method, pass the Subject to a different Sibling Component and bind the data to the Sibling Component's HTML.
So the routing of this data will look like this:
*Parent Component (via click event)* --> *Service (via method on Parent Component)* --> *Sibling Component (via Service)*
***Here is where my data passing starts:***
app.component.ts
```
import { Component, OnInit } from '@angular/core';
import { Router } from '@angular/router';
import { ApiService } from '../api.service';
@Component({
selector: 'app-contacts-list',
templateUrl: './contacts-list.component.html',
styleUrls: ['./contacts-list.component.scss']
})
export class ContactsListComponent implements OnInit {
sortedFavorites: any[] = [];
sortedContacts: any[] = [];
constructor (private _apiService: ApiService, private router: Router) {}
ngOnInit(){ this.getContacts()}
getContacts() {
this._apiService.getContacts()
.subscribe(
(contacts) => {
//Sort JSON Object Alphabetically
contacts.sort( (a, b) => {
if (a.name > b.name) return 1;
if (a.name < b.name) return -1;
return 0;
});
//Build new Sorted Arrays
contacts.forEach( (item) => {
if (item.isFavorite) {
this.sortedFavorites.push(item);
} else {
this.sortedContacts.push(item);
}
});
});
}
openFavorite($event, i) {<--HTML click event passing 'i' in as object clicked
let selectedFavorite = this.sortedFavorites[i];
this._apiService.openSelectedContact(selectedFavorite); <--passing said object into method connected to my services.ts file
this.router.navigate(['/details']);
};
}
```
The data that I am passing with the `openFavorite()` method I created is correct, because `console.log(selectedFavorite)` logs the desired result to be passed.
***Then it comes over to services***
app.service.ts:
```
import { Injectable } from '@angular/core';
import { Http, Response } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import { Subject } from 'rxjs/Subject';
import 'rxjs/add/operator/map';
@Injectable()
export class ApiService {
//API URL
private url: string = 'assets/api/contacts.json';
//Create new Subject and assign to local variable
public newContactSubject = new Subject();
//Initialize HttpClient for request
constructor(private \_http: Http) { }
//Pull JSON data from REST API
getContacts(): Observable {
return this.\_http.get(this.url)
.map((response: Response) => response.json());
}
openSelectedContact(data) {
this.newContactSubject.next(data); <---Where data should be passing in!
}
}
```
**Now I would like my other component to receive the data from app.service.**
```
import { Component, OnInit } from '@angular/core';
import { ContactsListComponent } from './app/contacts-list/contacts-list.component';
import { ApiService } from '../api.service';
@Component({
selector: 'app-contact-details',
templateUrl: './contact-details.component.html',
styleUrls: ['./contact-details.component.scss']
})
export class ContactDetailsComponent implements OnInit {
selectedContact: any[] = [];
error: string;
constructor(private _apiService: ApiService) { }
ngOnInit() { this.showContact() }
showContact() {
this._apiService.newContactSubject.subscribe(
data => this.selectedContact = data) <--Where the data should be showing up from services.ts file
console.log(this.selectedContact.name); <-- This is logging Undefined
}
}
```
What would I be missing here? Thanks so much in advance!<issue_comment>username_1: Try this:
```
showContact() {
this._apiService.newContactSubject.subscribe(
data => {
this.selectedContact = data;
console.log(this.selectedContact.name);
}
}
```
Both lines of code (including your logging) are within the function passed into the subscribe. Each time an item is emitted, it only runs the code *within* the callback function.
And as a side note ... it is normally recommended that you make your subject private and only expose it's read-only observable using code something like this:
```
private selectedMovieSource = new Subject();
selectedMovieChanges$ = this.selectedMovieSource.asObservable();
```
Notice that the subject is private and its observable is exposed using a separate property. The components then subscribe to the public observable of the subject.
Upvotes: 2 [selected_answer]<issue_comment>username_2: First of all, your component's sort method will never sort, because you're ignoring the return value. If you want to deal with sorting, you should use it like this: `contacts = contacts.sort(...)`
and I suggest another pattern:
```
import { Component, OnInit } from '@angular/core';
import { ContactsListComponent } from './app/contacts-list/contacts-list.component';
import { ApiService } from '../api.service';
import { OnDestroy } from "@angular/core";
import { ISubscription } from "rxjs/Subscription";
@Component({
selector: 'app-contact-details',
templateUrl: './contact-details.component.html',
styleUrls: ['./contact-details.component.scss']
})
export class ContactDetailsComponent implements OnInit, OnDestroy {
selectedContact: any[] = [];
error: string;
private subscription: ISubscription;
constructor(private _apiService: ApiService) { }
ngOnInit() {
this.subscription = this._apiService.newContactSubject().subscribe(data => {
this.selectedContact = data;
console.log(this.selectedContact.name);
});
}
showContact() {
this._apiService.newContactSubject.subscribe();
}
ngOnDestroy() {
this.subscription.unsubscribe();
}
}
```
But I've found another issue: you have defined your `selectedContact` as an *any* object array, but then you access it as a single object: `this.selectedContact.name`. I hope you can solve this problem :)
Upvotes: 0
|
2018/03/17
| 262 | 989 |
<issue_start>username_0: I am trying to simulate poor network connectivity.
I set Android Studio's emulator Cellular to Signal strength = None, but it is still allowing network traffic to go through (I am making successful HTTP requests using Okhttp after setting "None" in the emulator)
Is this a bug?
[](https://i.stack.imgur.com/LZor1.png)<issue_comment>username_1: You may need to set 'Network type' as well.
I think this is a bug... it confuses me too. I mean, how can cellular data work without any signal strength?
Upvotes: 1 <issue_comment>username_2: It seems to be a bug - I also stumbled upon the problem and found this bug-report:
<https://issuetracker.google.com/issues/136937549>
Interestingly, switching to roaming there switches off the network for me (which is what I mainly needed; maybe it helps you too).
Otherwise, starring the issue can help get a fix sooner, as Google then knows it is annoying to devs.
Upvotes: 2
|
2018/03/17
| 974 | 2,423 |
<issue_start>username_0: I have a friendly (not so much, but it fits me well) URL expression in my .htaccess.
The page behaves according to "mode" and the ids.
For editing a category:
```
#URL
edit.php?mode=cat&cat=1
#The expression
RewriteRule ^editar/([^/]*)/([^/]*)$ /edit.php?mode=$1&cat=$2 [NC,L]
#Output
/editar/cat/1
```
Editing subcategory:
```
#URL
edit.php?mode=subcat&cat=1&subcat=1
#The expression
RewriteRule ^editar/([^/]*)/([^/]*)/([^/]*)$ /edit.php?mode=$1&cat=$2&subcat=$3 [NC,L]
#Output
/editar/subcat/1/1
```
Editing product:
```
#URL
edit.php?mode=prod&cat=1&subcat=1&prod=1
#The expression
RewriteRule ^editar/([^/]*)/([^/]*)/([^/]*)/([^/]*)$ /edit.php?mode=$1&cat=$2&subcat=$3&prod=$4 [NC,L]
#Output
/editar/prod/1/1/1
```
So far, so good!
But I have three expressions for the same page.
```
RewriteRule ^editar/([^/]*)/([^/]*)$ /edit.php?mode=$1&cat=$2 [NC,L]
RewriteRule ^editar/([^/]*)/([^/]*)/([^/]*)$ /edit.php?mode=$1&cat=$2&subcat=$3 [NC,L]
RewriteRule ^editar/([^/]*)/([^/]*)/([^/]*)/([^/]*)$ /edit.php?mode=$1&cat=$2&subcat=$3&prod=$4 [NC,L]
```
Is there any way to reduce that to just ONE expression, where some variables might or might not be present in the string?
Like: edit.php?mode=prod&cat=1 with THIS -> &subcat=1 AND THIS -> &prod=1 being optional?
Thanks!<issue_comment>username_1: You can make the 3rd and 4th capture-groups optional in your pattern.
```
RewriteRule ^editar/([^/]*)/([^/]*)/?([^/]*)/?([^/]*)/?$ /edit.php?mode=$1&cat=$2&subcat=$3&prod=$4 [NC,L]
```
But doing this, you will get empty `subcat` and `product` values if the URI is `/editar/foo/bar`.
Upvotes: 0 <issue_comment>username_2: Try this :
```
RewriteEngine On
RewriteRule ^editar/([^/]+)/([^/]+)/?([^/]*)/?([^/]*)?$ - [E=PASS:mode=$1&cat=$2&subcat=$3&prod=$4]
RewriteCond %{ENV:PASS} ^((mode=(.*))&subcat=&prod=|(mode=(.*)&subcat=(.+))&prod=|(mode=(.*)&subcat=(.+)&prod=(.+)))$
RewriteRule ^ /edit.php?%2%4%7 [L,NE]
```
So , with the code above you will be able to handle requests like the following :
```
/editar/1 or editar/1/ = not match
/editar/1/2 or editar/1/2/ = match and redirect to /edit.php?mode=1&cat=2
/editar/1/2/3 or editar/1/2/3/ = match and redirect to /edit.php?mode=1&cat=2&subcat=3
/editar/1/2/3/4 = match and redirect to /edit.php?mode=1&cat=2&subcat=3&prod=4
/editar/1/2/3/4/ = not match
```
**Note:** clear the browser cache, then test it.
Upvotes: 2 [selected_answer]
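The optional-trailing-segment matching that these rules implement can be prototyped outside Apache. A hedged Python sketch using `re` with optional groups (the parameter names mirror the question; this is an illustration, not Apache's rewrite engine):

```python
import re

# Two required path segments, then two optional ones.
PATTERN = re.compile(r"^editar/([^/]+)/([^/]+)(?:/([^/]+))?(?:/([^/]+))?$")

def rewrite(path):
    """Map /editar/... to the edit.php query string, skipping absent params."""
    m = PATTERN.match(path)
    if not m:
        return None
    names = ("mode", "cat", "subcat", "prod")
    pairs = [f"{n}={v}" for n, v in zip(names, m.groups()) if v is not None]
    return "/edit.php?" + "&".join(pairs)

print(rewrite("editar/cat/1"))       # /edit.php?mode=cat&cat=1
print(rewrite("editar/subcat/1/1"))  # /edit.php?mode=subcat&cat=1&subcat=1
print(rewrite("editar/prod/1/1/1"))  # /edit.php?mode=prod&cat=1&subcat=1&prod=1
```

Paths with fewer than two segments simply fail to match, which mirrors the "not match" cases listed above.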
|
2018/03/17
| 1,059 | 4,363 |
<issue_start>username_0: This is a database model design question. Let's say I'm designing an app like Slack. Slack has multiple Organizations, and within each Organization there are objects that only should be accessed by that Organization (eg. its chat records, files, etc.). What is the best way to set up these per-Organization objects in Django?
A simple solution is to attach a ForeignKey to every one of these objects. Like so:
```
class Organization(models.Model):
# ...
class File(models.Model):
organization = models.ForeignKey(
'Organization',
on_delete=models.CASCADE,
)
# ...
class ChatThread(models.Model):
organization = models.ForeignKey(
'Organization',
on_delete=models.CASCADE,
)
# ...
```
But if we do it like this, we need to put an index on `organization`, and since there are many such per-Organization objects, it seems a little wasteful.
Is there a cleaner way to design this?<issue_comment>username_1: For me the real question is:
Does your File model always need an Organization?
Same question for the ChatThread model.
If not, then you shouldn't use a ForeignKey, but a ManyToManyField.
That way you avoid model dependencies.
Otherwise, your models are correct.
Upvotes: 0 <issue_comment>username_2: Your method is clean; if each `Organization` could share the same `File`, then you should use `models.ManyToManyField`, but I doubt Slack works like this.
On top of that, in Slack you can access files from anywhere, but each file you share isn't necessarily published to every thread.
The model structure you offer seems best for what you're trying to achieve; if people in each organization use your app, you might also want a model for each person.
Here's what I'd go with :
```
class Organization(models.Model):
# ...
#each person is part of an organization
class Person(models.Model):
organization = models.ForeignKey('Organization', on_delete=models.CASCADE)
# ...
#each file is part of an organization
class File(models.Model):
organization = models.ForeignKey('Organization', on_delete=models.CASCADE)
# ...
#each thread is part an Organization
#each thread can have many users, each user can join many thread.
#each thread can have many files, each file can be shared across one or many thread
class ChatThread(models.Model):
organization = models.ForeignKey('Organization', on_delete=models.CASCADE)
people = models.ManyToManyField(Person, ...)
files = models.ManyToManyField(File, ...)
# ...
```
Upvotes: 2 <issue_comment>username_3: I think your method is about as good as it would need to be. In terms of indexing the organization column, you can use `db_index=False` to disable the creation of an index.
If you want to abstract the organization field and have some methods available on all organization objects, you could use an abstract model like so:
```
class Organization(models.Model):
# ...
class OrganizationModel(models.Model):
organization = models.ForeignKey(
'Organization',
on_delete=models.CASCADE,
db_index=False,
)
class Meta:
abstract = True
class File(OrganizationModel):
# ...
class ChatThread(OrganizationModel):
# ...
```
Upvotes: 3 <issue_comment>username_4: I'm not sure about that, but if you are designing an app like Slack, I think you need to design it using tenants, for example: customer.domain.com. It's better for organizing the information.
In your base organization you can do it like this example:
```
# The Organization Model
class Organization(models.Model):
# ...
#each person is part of an organization
class Person(models.Model):
organization = models.ForeignKey('Organization', on_delete=models.CASCADE)
# ...
#each file is part of an organization
class File(models.Model):
organization = models.ForeignKey('Organization', on_delete=models.CASCADE)
# ...
#each thread is part an Organization
#each thread can have many users, each user can join many thread.
#each thread can have many files, each file can be shared across one or many thread
class ChatThread(models.Model):
organization = models.ForeignKey('Organization', on_delete=models.CASCADE)
people = models.ManyToManyField(Person, ...)
files = models.ManyToManyField(File, ...)
# ...
```
Upvotes: 0
|
2018/03/17
| 942 | 3,803 |
<issue_start>username_0: I am trying to create code that takes an integer 'a' and a list and returns a list of every value in the list that is less than a. I have created code that will figure out if the first number in the list is less than 'a' but I can't quite figure out the recursion. Any help would be great!
```
fun smallethan(a,[]) = [] | smallerthan(a,list) = if hd(list) < a then [hd(list)];
```
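Since the asker wanted the approach rather than finished code, here is the recursion pattern sketched in Python for illustration (the same shape carries over to SML: an empty-list base case, then keep or skip the head and recurse on the tail):

```python
def smaller_than(a, lst):
    """Return every value in lst that is less than a, recursively."""
    if not lst:                       # base case: the empty list
        return []
    head, tail = lst[0], lst[1:]
    if head < a:                      # keep the head, recurse on the tail
        return [head] + smaller_than(a, tail)
    return smaller_than(a, tail)      # skip the head, recurse on the tail

print(smaller_than(5, [1, 7, 3, 9, 2]))  # [1, 3, 2]
```

In SML the same two-way decision is usually written with pattern matching on `[]` versus `x::xs`, with the recursive call on `xs` in both branches.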
|
2018/03/17
| 320 | 1,088 |
<issue_start>username_0: I am trying to migrate my VueJS project from VueCLI 2 to 3.
All the files are copied over to the src folder.
When I try to view it in the browser using `npm run serve`, I get this HTML with no `#app` element present:
```
Webpack App
```
The console gives this error:
```
vue.runtime.esm.js?ff9b:587 [Vue warn]: Cannot find element: #app
```
Where does the HTML get rendered from? In the previous version of VueCLI there was an index.html file. Now that's missing?
In my main.js I do the same as what's in the boilerplate when initializing a new project:
```
new Vue({
i18n,
router,
store,
render: h => h(App)
}).$mount('#app')
```<issue_comment>username_1: I think you need to create a place in the DOM for the rendering to happen. In this case the application is looking for an element with the id of 'app'.
```
<div id="app"></div>
```
Upvotes: 0 <issue_comment>username_2: I figured it out! The new index.html isn't in the root anymore. In VueCli 3 this is in the `/public` folder!
So I had to move /index.html to /public/index.html.
Upvotes: 3 [selected_answer]
|
2018/03/17
| 1,451 | 6,333 |
<issue_start>username_0: I have a problem. I'm trying to create an app for my website using an Android WebView, but since there are external links (ads, backlinks, etc.) on my website, when the user clicks these links the WebView opens them and my app acts like a browser. I know that's what a WebView is, but I want it to open only links of my own website.
I overrode the shouldOverrideUrlLoading method and intercepted the URLs,
and I returned true if a URL differs from my website prefix, but the WebView goes all white when I click an external link; I want the WebView to stay the same when external links are clicked.
Here's my code:
```
public boolean shouldOverrideUrlLoading(WebView view, String url) {
String host = Uri.parse(url).getHost();
urlData = url;
if (target_url_prefix.equals(host)) {
if (mWebviewPop != null) {
mWebviewPop.setVisibility(View.GONE);
mContainer.removeView(mWebviewPop);
mWebviewPop = null;
}
return false;
}
if(!url.contains(getString(R.string.target_url))) {
Log.d("intercept External URL", "true");
return true;
}
}
```<issue_comment>username_1: You do not have to write a `shouldOverrideUrlLoading` method. Just load the URL in the `onCreate()` method.
Check this code.
**MainActivity.this**.
```
public class MainActivity extends AppCompatActivity
{
private static final String PAGE_URL = "http://madarabia.com";
private NoInternetDialog noInternetDialog;
private WebView mWebView;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
getWindow().addFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN);
setContentView(R.layout.activity_main);
noInternetDialog = new NoInternetDialog.Builder(getApplicationContext()).build();
mWebView = findViewById(R.id.webview);
// Force links and redirects to open in the WebView instead of in a browser
mWebView.setWebChromeClient(new WebChromeClient());
mWebView.getSettings().setJavaScriptEnabled(true);
mWebView.getSettings().setDomStorageEnabled(true);
mWebView.setWebViewClient(new WebViewClient());
mWebView.getSettings().setSaveFormData(true);
mWebView.getSettings().setAllowContentAccess(true);
mWebView.getSettings().setAllowFileAccess(true);
mWebView.getSettings().setAllowFileAccessFromFileURLs(true);
mWebView.getSettings().setAllowUniversalAccessFromFileURLs(true);
mWebView.getSettings().setSupportZoom(false);
mWebView.setClickable(true);
// Use remote resource
mWebView.postDelayed(new Runnable() {
@Override
public void run() {
mWebView.loadUrl(PAGE_URL);
}
}, 500);
mWebView.onCheckIsTextEditor();
mWebView.requestFocus(View.FOCUS_DOWN);
mWebView.setOnTouchListener(new View.OnTouchListener() {
@Override
public boolean onTouch(View v, MotionEvent event) {
switch (event.getAction())
{
case MotionEvent.ACTION_DOWN:
case MotionEvent.ACTION_UP:
if (!v.hasFocus())
{
v.requestFocus();
}
break;
}
return false;
}
});
}
@Override
protected void onDestroy() {
super.onDestroy();
noInternetDialog.onDestroy();
}
// Prevent the back-button from closing the app
@Override
public void onBackPressed() {
if(mWebView.canGoBack()) {
mWebView.goBack();
} else {
super.onBackPressed();
}
}
@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
if ((keyCode == KeyEvent.KEYCODE_BACK) && mWebView.canGoBack()) {
mWebView.goBack();
//If there is history, then the canGoBack method will return ‘true’//
return true;
}
//If the button that’s been pressed wasn’t the ‘Back’ button, or there’s currently no
//WebView history, then the system should resort to its default behavior and return
//the user to the previous Activity//
return super.onKeyDown(keyCode, event);
}
}
```
Upvotes: 0 <issue_comment>username_2: In both cases you have consumed the event. Try something like this instead:
```
webView.setWebViewClient(new WebViewClient(){
@Override
public boolean shouldOverrideUrlLoading(WebView view, WebResourceRequest request) {
            if (request.getUrl().toString().equals(host)) {
// Intercept URL Load url here return true if url is consumed
return true;
}
return super.shouldOverrideUrlLoading(view, request);
}
});
```
Or if you want to block all other links, then you can use it like below:
```
webView.setWebViewClient(new WebViewClient(){
@Override
public boolean shouldOverrideUrlLoading(WebView view, WebResourceRequest request) {
            if (request.getUrl().toString().equals(yourwebsite)) {
return super.shouldOverrideUrlLoading(view, request);
}
return true;
}
});
```
Keep in mind that all other links will not work, which can make a bad impression in your app. So I suggest you open other links with a browser intent, like below:
```
webView.setWebViewClient(new WebViewClient(){
@Override
public boolean shouldOverrideUrlLoading(WebView view, WebResourceRequest request) {
            if (request.getUrl().toString().equals(yourwebsite)) {
return super.shouldOverrideUrlLoading(view, request);
}else{
try {
                    Intent browserIntent = new Intent(Intent.ACTION_VIEW, request.getUrl());
startActivity(browserIntent);
}catch (Exception e){
e.printStackTrace();
}
}
return true;
}
});
```
**NOTE**: This is the implementation for `shouldOverrideUrlLoading(WebView view, WebResourceRequest request)`, which applies from API 21 upward. You should also override `shouldOverrideUrlLoading(WebView view, String url)` in the same way for earlier versions.
Upvotes: 3 [selected_answer]
|
2018/03/17
| 492 | 1,762 |
<issue_start>username_0: I've just stumbled across the following syntax in TypeScript
```
export interface OrderPool {
[id: string]: Level3Order;
}
```
Could someone clarify what I am looking at?
Best I can understand is that this is an interface `OrderPool` which contains a property named Id of type `string(array?)` and something of type `Level3Order`????
What is the relationship of `Level3Order` to the property Id and is Id an array or single instance?<issue_comment>username_1: What it means is simply, "If you index into something of type *OrderPool*, you get back something of type *Level3Order*."
The key type of an index signature must be *string* or *number*. The name of the parameter ("id") is immaterial.
Take a look at the example below to get a complete idea.
```
class Level3Order{
public dummy : number = 0;
}
export interface OrderPool {
[id :string]: Level3Order;
}
let pool : OrderPool ={}
pool["test"] = 5; //Error number is not assignable to Level3Order
pool["test"] = new Level3Order(); //Ok
pool["whatever"] = new Level3Order(); //Ok
pool.whatever = new Level3Order(); //Still ok
pool["test"].dummy = 5; // Dummy is a property on Level3Order, Ok
```
Upvotes: 1 <issue_comment>username_2: That means objects that implement the interface `OrderPool` contain key/value pairs, where the key (called `id` in this case) is of type string and the value is of type `Level3Order`.
For example, this object correctly implements the interface:
```
{
'item1': new Level3Order,
'anotherItem': new Level3Order
}
```
You could also have something like
```
export interface OrderPool {
[id: number]: Level3Order;
}
```
example:
```
{
1: new Level3Order(),
5: new Level3Order()
}
```
Upvotes: 4 [selected_answer]
|
2018/03/17
| 342 | 1,277 |
<issue_start>username_0: I'm trying to build the Flutter sample project in VS Code, but got these messages... The sample builds and runs fine on an Android device, and I also tried running the sample with Android Studio on the iOS simulator, which is also fine. It just doesn't work in VS Code. Has anyone met the same issue?
**Flutter: v0.2.2**
```
Launching lib/main.dart on iPhone 8 in debug mode...
Xcode build done
Failed to build iOS app
Error output from Xcode build:
↳
** BUILD FAILED **
Xcode's output:
↳
=== BUILD TARGET Runner OF PROJECT Runner WITH CONFIGURATION Debug ===
/Users/Sean/Documents/myapp/build/ios/Debug-iphonesimulator/Runner.app:resource fork, Finder information, or similar detritus not allowed
Command /usr/bin/codesign failed with exit code 1
Could not build the application for the simulator.
Error launching application on iPhone 8.
Exited (sigterm)
```<issue_comment>username_1: I suspect you don't have Xcode installed, or haven't enabled your Runner project for your developer ID.
Upvotes: 0 <issue_comment>username_2: This is a new security measure in the Xcode build system.
You can go into the details here: <https://stackoverflow.com/a/39667628/6622328>
For a quick fix, just clean the app before rebuilding it (`flutter clean`).
Upvotes: 2
|
2018/03/17
| 387 | 1,507 |
<issue_start>username_0: I am working on some project in Unity. I have:
```
[Serializable]
public class ItemAction
{
[SerializeField]
private UnityEvent unityEvent;
public void Perform()
{
unityEvent.Invoke();
}
}
[Serializable]
public class ItemAction<T>
{
[SerializeField]
private UnityEvent<T> unityEvent;
public void Perform(T parameter)
{
unityEvent.Invoke(parameter);
}
}
```
Also I have this class:
```
public abstract class Item : MonoBehaviour
{
[SerializeField]
private float weight;
[SerializeField]
private [collection of item actions] actions;
public abstract void Use();
public abstract void Grab(Transform transform);
public abstract void Drop();
}
```
How do I create a collection that mixes both generic and non-generic ItemAction instances (so that some actions may require parameters)?
For example:
For unequipped weapons, I can only grab them.
For unequipped medkits, I can grab them or use them immediately.
For triggers/switchers, I can only use them.
I could probably use an empty interface, but I don't think it's good solution...<issue_comment>username_1: I suspect you don't have XCode installed, or have enabled your Runner project for your developer ID.
Upvotes: 0 <issue_comment>username_2: This is a new security measure in the Xcode build system.
You can go into details here : <https://stackoverflow.com/a/39667628/6622328>
For a quick fix, just clean the app before rebuilding it (flutter clean).
Upvotes: 2
|
2018/03/17
| 1,811 | 7,409 |
<issue_start>username_0: Given below is a java class using Bill Pugh singleton solution.
```
public class Singleton {
int nonVolatileVariable;
private static class SingletonHelper {
private static Singleton INSTANCE = new Singleton();
}
private Singleton() { }
public static Singleton getInstance() {
return SingletonHelper.INSTANCE;
}
public int getNonVolatileVariable() {
return nonVolatileVariable;
}
public void setNonVolatileVariable(int nonVolatileVariable) {
this.nonVolatileVariable= nonVolatileVariable;
}
}
```
I have read in many places that this approach is thread safe. So if I understand it correctly, the singleton instance is created only once and all threads accessing the `getInstance` method will receive the same instance of the class `Singleton`. However, I was wondering whether the threads can locally cache the obtained singleton object. Can they? If yes, wouldn't that mean each thread can change the instance field `nonVolatileVariable` to different values, which might create problems?
I know that there are other singleton creation methods such as enum singleton, but I am particularly interested in this singleton creation method.
So my question is: is there a need to use the volatile keyword, like
`int volatile nonVolatileVariable;`, to make sure that the singleton using this approach is truly thread safe? Or is it already truly thread safe? If so, how?<issue_comment>username_1: >
> So my question is that, is there a need to use the volatile keyword
> like int volatile nonVolatileVariable; to make sure that the
> singleton using this approach is truly thread safe? Or is it already
> truly thread safe? If so how?
>
>
>
The singleton pattern ensures that a single instance of the class is created. It doesn't ensure that fields and methods are thread-safe, and volatile doesn't ensure it either.
>
> However I was wondering if the threads can locally cache the obtained
> singleton object ?
>
>
>
According to the memory model in Java, yes they can.
>
> if yes then doesn't that would mean that each thread can change the
> instance field nonVolatileVariable to different values which might
> create problems.
>
>
>
Indeed, but you would still have a consistency problem with a `volatile` variable, because `volatile` handles the memory-visibility question but doesn't handle synchronization between threads.
Try the following code, where multiple threads increment a volatile `int` `100` times.
You will see that you do not always get `100` as the result.
```
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
public class Singleton {
volatile int volatileInt;
private static class SingletonHelper {
private static Singleton INSTANCE = new Singleton();
}
private Singleton() {
}
public static Singleton getInstance() {
return SingletonHelper.INSTANCE;
}
public int getVolatileInt() {
return volatileInt;
}
public void setVolatileInt(int volatileInt ) {
this.volatileInt = volatileInt ;
}
public static void main(String[] args) throws InterruptedException {
ExecutorService executorService = Executors.newFixedThreadPool(5);
List<Callable<Object>> callables = IntStream.range(0, 100)
.mapToObj(i -> {
Callable<Object> callable = () -> {
Singleton.getInstance().setVolatileInt(Singleton.getInstance().getVolatileInt()+1);
return null;
};
return callable;
})
.collect(Collectors.toList());
executorService.invokeAll(callables);
System.out.println(Singleton.getInstance().getVolatileInt());
}
}
```
To ensure that each thread takes other invocations into consideration, you have to use external synchronization, and in that case making the variable volatile is not required.
For example :
```
synchronized (Singleton.getInstance()) {
Singleton.getInstance()
.setVolatileInt(Singleton.getInstance().getVolatileInt() + 1);
}
```
And in this case, volatile is not required any longer.
Upvotes: 2 <issue_comment>username_2: The specific guarantee of this type of singleton basically works like this:
1. Each class has a unique lock for class initialization.
2. Any action which could cause class initialization (such as accessing a static method) is required to first acquire this lock, check if the class needs to be initialized and, if so, initialize it.
3. If the JVM can determine that the class is already initialized and the current thread can see the effect of that, then it can skip step 2, including skipping acquiring the lock.
(This is documented in [§12.4.2](https://docs.oracle.com/javase/specs/jls/se9/html/jls-12.html#jls-12.4.2).)
In other words, what's guaranteed here is that all threads must at least see the effects of the assignment in `private static Singleton INSTANCE = new Singleton();`, and anything else that was performed during static initialization of the `SingletonHelper` class.
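Outside Java, the same "initialize once under a lock, then skip the lock" discipline has to be coded by hand. A minimal Python sketch, for illustration only (the JVM performs this automatically for class initialization, so this code is an analogy, not the mechanism itself):

```python
import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get_instance(cls):
        if cls._instance is None:        # fast path: lock skipped once initialized
            with cls._lock:              # slow path: acquire the init lock
                if cls._instance is None:  # re-check inside the lock
                    cls._instance = cls()
        return cls._instance

a = Singleton.get_instance()
b = Singleton.get_instance()
print(a is b)  # True
```

Every caller that reaches the fast path sees the fully constructed instance; only the very first callers ever contend on the lock, mirroring the class-initialization guarantee described above.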
Your analysis that concurrent reads and writes of the non-`volatile` variable can be inconsistent between threads is correct, although the language specification isn't written in terms of caching. The way the language specification is written is that reads and writes can appear out of order. For example, suppose the following sequence of events, listed chronologically:
```
nonVolatileVariable is 0
ThreadA sets nonVolatileVariable to 1
ThreadB reads nonVolatileVariable (what value should it see?)
```
The language specification allows `ThreadB` to see the value 0 when it reads `nonVolatileVariable`, which is as if the events had happened in the following order:
```
nonVolatileVariable is 0
ThreadB reads nonVolatileVariable (and sees 0)
ThreadA sets nonVolatileVariable to 1
```
In practice, this is due to caching, but the language specification doesn't say what may or may not be cached (except [here](https://docs.oracle.com/javase/specs/jls/se9/html/jls-17.html#jls-17.3) and [here](https://docs.oracle.com/javase/specs/jls/se9/html/jls-17.html#jls-17.5), as a brief mention), it only specifies the order of events.
---
One extra note regarding thread-safety: some actions are always considered atomic, such as reads and writes of object references ([§17.7](https://docs.oracle.com/javase/specs/jls/se9/html/jls-17.html#jls-17.7)), so there are *some* cases where the use of a non-`volatile` variable can be considered thread-safe, but it depends on what you're specifically doing with it. There can still be memory inconsistency but concurrent reads and writes can't somehow *interleave*, so you can't end up with e.g. an invalid pointer value somehow. It's therefore sometimes safe to use non-`volatile` variables for e.g. lazily-initialized fields *if it doesn't matter that the initialization procedure could happen more than once*. I know of at least one place in the JDK where this is used, in [`java.lang.reflect.Field`](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/java/lang/reflect/Field.java#Field) (also see [this comment in the file](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/java/lang/reflect/Field.java#1072)), but it's not the norm.
Upvotes: 1
|
2018/03/17
| 1,372 | 4,759 |
<issue_start>username_0: First of all, I am expecting that `--no-obsolete` would comment out `msgid` and `msgstr` if the `gettext` call is deleted, right?
How I am testing is:
1. I wrote `gettext("some string here")` in view
2. I ran `makemessages` command
3. It wrote a `.po` file as expected
4. Then I deleted `gettext()` line from view and saved file, verified `runserver` working.
5. I ran `makemessages --no-obsolete` and it has not made any changes to `.po` file.
`.po` file content extract .
```
#. Translators: This message is a test of wrap line
#: servers/views.py:31
msgid "Do let me know if it works."
msgstr ""
```
**dev environment**
>
> Django = 1.11
>
> OS = Mac/Ubuntu 14.04
>
>
>
**settings.py**
```
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
LOCALE = (
os.path.join(os.path.dirname(__file__), "locale"),
)
```<issue_comment>username_1: What the --no-obsolete flag does is run a command called msgattrib with the --no-obsolete option on the content of the po file. A typical case: you generate your po file with makemessages and you get this:
```
#: servers/views.py:31
msgid "Do let me know if it works."
msgstr ""
```
Then you translate:
```
#: servers/views.py:31
msgid "Do let me know if it works."
msgstr "translation"
```
Then you remove the gettext entry, it'll still by default keep the translation, but mark it as obsolete.
```
#: servers/views.py:31
#~msgid "Do let me know if it works."
#~msgstr "translation"
```
If you set the --no-obsolete option, then once your po file is done, it'll run msgattrib with the no-obsolete option. This will remove the lines tagged with #~. See <https://linux.die.net/man/1/msgattrib>
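As a toy illustration of that step (this is not the real msgattrib implementation), removing obsolete entries amounts to dropping the lines tagged with `#~`:

```python
po_text = """\
#: servers/views.py:31
msgid "Do let me know if it works."
msgstr "translation"
#~ msgid "An old, removed string."
#~ msgstr "old translation"
"""

# Toy version of msgattrib --no-obsolete: keep only non-#~ lines.
kept = [line for line in po_text.splitlines() if not line.startswith("#~")]
print("\n".join(kept))
```

The live entry survives while both `#~`-tagged lines (the obsolete entry) are removed, which is exactly the visible effect of the real option.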
But the way makemessages is built, msgattrib is only called once the po file is written. If there are no gettext calls in the files being processed, makemessages won't write to the po file at all; it stops before ever reaching the msgattrib command. The po file you see is the one generated by the previous makemessages run, so no-obsolete won't do anything.
There's no real solution to this: the no-obsolete option doesn't deal with the case where you have no gettext calls to process.
Upvotes: 2 <issue_comment>username_2: So I think @JulienGrégoire was right: if no translation is processed, then `--no-obsolete` won't work. There needs to be at least one translation captured for `--no-obsolete` to work.
But the solution to this is quite simple. You can update your `settings.py` to define `LANGUAGES` like below:
```
from django.utils.translation import ugettext_lazy as _
LANGUAGES = (
('en', _('English')),
('fr', _('French')),
)
```
Now your `settings` will always generate a translation. So it will make sure that you get `--no-obsolete` working every time you use it
Upvotes: 1 <issue_comment>username_3: Now, with the help of Julien and Tarun, I made the following observations.
`python manage.py makemessages -l`
If there is no `gettext` in the files being processed, the above command won't write/update the `.po` file. That means if the corresponding `.po` file previously had entries for `msgstr` and `msgid`, those entries won't be removed unless the files being processed contain at least one `gettext`.
>
> Note: Above behavior is irrespective of `--no-obsolete`
>
>
>
**Now to make the `--no-obsolete` work as expected we need to follow the steps below.**
1. First thing run `python manage.py makemessages -l` , this would write `.po` file with `msgid` and `msgstr`.
2. Now set `msgstr` and run `python manage.py compilemessages -l` . This command writes `.mo` file in the same directory as `.po` file.
3. Now next time when you run `makemessages` again (**without --no-obsolete**), `.po` and `.mo` files are compared and missing/deleted `gettext` are commented in `.po` file.
4. And when you run `makemessages --no-obsolete`, commented entries are removed from the `.po` file.
`E.g`
if you have 3 `gettext` entries and you run `makemessages` for the first time, it will write 3 `msgid` and 3 `msgstr` entries in the `.po` file. Now if you remove all `gettext` entries, the `.po` file won't be updated after you run `makemessages` again; but if you keep at least 1 `gettext` entry in the same file and run `makemessages` again, it will delete all `msgid` and `msgstr` entries for the deleted `gettext` entries.
But if you run `compilemessages` after `makemessages`, the `.mo` file is created, and then for subsequent `makemessages` commands the `.po` and `.mo` files are compared and the `msgid` and `msgstr` entries are commented out in the `.po` file for deleted `gettext` entries.
Then finally when you run `makemessages` with `--no-obsolete` option the commented messages from `.po` files are deleted permanently.
Upvotes: 3 [selected_answer]
|
2018/03/17
| 1,233 | 4,245 |
<issue_start>username_0: Why does the following code not return `0 => 'Zero'` for the first line, but `0 => 0`?
```
for ($i = 0; $i <= 30; $i += 1) {
if($i == 0) { $array[$i] = 'Zero'; }
$array[$i] = $i;
}
for ($i = 30; $i <= 100; $i += 5) {
$array[$i] = $i;
}
for ($i = 100; $i <= 200; $i += 10) {
$array[$i] = $i;
}
return $array;
```
Thanks.
|
2018/03/17
| 1,143 | 4,160 |
<issue_start>username_0: I'm new to Scala. I came across this question:
If side effects (outputs, global variables, ...) are not taken into account, how many implementations can the method below have? How can I figure it out in general?
```
def g[A,B,C](x: A,y: B,f:(A,B) => C): C
```
|
2018/03/17
| 566 | 1,846 |
<issue_start>username_0: ```
struct MinHeap{
int size;
int* array;
};
void minHeapify(struct MinHeap* minHeap, int idx)
{
int smallest, left, right;
smallest = idx;
left = 2 * idx + 1;
right = 2 * idx + 2;
if (left < minHeap->size && minHeap->array[left]->key < minHeap->array[smallest]->key )
smallest = left;
if (right < minHeap->size && minHeap->array[right]->key < minHeap->array[smallest]->key )
smallest = right;
if (smallest != idx)
{
// The nodes to be swapped in min heap
MinHeapNode *smallestNode = minHeap->array[smallest];
MinHeapNode *idxNode = minHeap->array[idx];
// Swap positions
minHeap->pos[smallestNode->v] = idx;
minHeap->pos[idxNode->v] = smallest;
// Swap nodes
swapMinHeapNode(&minHeap->array[smallest], &minHeap->array[idx]);
minHeapify(minHeap, smallest);
}
}
```
It shows Invalid type argument of '->' (have 'int') in the minHeapify function wherever '->' is used. Please tell me what to do here because the other functions are working well.<issue_comment>username_1: 1. `minHeap->array[left]` is an integer value (not a pointer to a structure), so `minHeap->array[left]->key` produces a compilation error.
2. `MinHeapNode *smallestNode = minHeap->array[smallest];` assigns an `int` to a `MinHeapNode*`, which is invalid because `minHeap->array[smallest]` is an `int`, not a pointer.
Upvotes: 0 <issue_comment>username_2: You have
```
minHeap->array[left]->key
```
of which `minHeap` is of type pointer to `struct MinHeap`.
Therefore `minHeap->` accesses a member of the `struct MinHeap`.
Because of `minHeap->array`, the accessed member is `array` which is of type pointer to `int`.
With the `[left]` you treat that pointer like an array, which is OK. The result is an `int`.
So what you are applying the next `->` to should be a pointer, but is an `int`.
This is what the compiler is telling you.
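For contrast, here is a minimal layout under which `->key` compiles: the array must hold pointers to a node struct that actually has a `key` member. (The `MinHeapNode` fields below are assumed from how the question's code uses them; the posted struct only declares `int* array`, and it also has no `pos` member even though the code references one.)

```c
typedef struct MinHeapNode {
    int v;    /* vertex id, as used via smallestNode->v */
    int key;  /* ordering key, as used via array[i]->key */
} MinHeapNode;

typedef struct MinHeap {
    int size;
    MinHeapNode **array;  /* pointers to nodes, so array[i]->key is valid */
} MinHeap;

static void swapMinHeapNode(MinHeapNode **a, MinHeapNode **b) {
    MinHeapNode *t = *a;
    *a = *b;
    *b = t;
}

void minHeapify(MinHeap *minHeap, int idx) {
    int smallest = idx;
    int left  = 2 * idx + 1;
    int right = 2 * idx + 2;
    if (left < minHeap->size &&
        minHeap->array[left]->key < minHeap->array[smallest]->key)
        smallest = left;
    if (right < minHeap->size &&
        minHeap->array[right]->key < minHeap->array[smallest]->key)
        smallest = right;
    if (smallest != idx) {
        swapMinHeapNode(&minHeap->array[smallest], &minHeap->array[idx]);
        minHeapify(minHeap, smallest);
    }
}
```

With `array` declared as `MinHeapNode **`, every `->` in the function applies to a pointer, so the "invalid type argument of '->'" error disappears.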
Upvotes: 1
|
2018/03/17
| 1,301 | 4,103 |
<issue_start>username_0: Consider the following piece of code:
```
#include <cstdint>

class A
{
public:
    explicit A(uint8_t p_a){ m_a = p_a; };
    uint8_t get_a() const {return m_a;}
private:
    uint8_t m_a;
};

int main()
{
    A a {0x21U};
    A aa{0x55U};
    uint8_t mask{a.get_a() | aa.get_a()};
    return 0;
}
```
When I try to compile this (`gcc 5.4.0`) I get the following error:
```
main.cpp: In function ‘int main()’:
main.cpp:20:28: warning: narrowing conversion of ‘(int)(a.A::get_a() | aa.A::get_a())’ from ‘int’ to ‘uint8_t {aka unsigned char}’ inside { } [-Wnarrowing]
uint8_t mask{a.get_a() | aa.get_a()};
```
I don't really understand why there is any narrowing at all. The `int` type is never used anywhere in my code, everything is written in terms of `unsigned char`s. Even if I explicitly cast to `unsigned char` I get the error:
```
uint8_t mask{static_cast<uint8_t>(a.get_a()) | static_cast<uint8_t>(aa.get_a())};
```
In fact, to solve this, I need to remove the `{}`-initialization. Then it works:
```
uint8_t mask = a.get_a() | aa.get_a();
```
Why is this necessary?<issue_comment>username_1: ### [Integral promotion](http://en.cppreference.com/w/cpp/language/implicit_conversion#Integral_promotion) case:
>
> [prvalues](http://en.cppreference.com/w/cpp/language/value_category#prvalue) of small integral types (such as `char`) may be converted to
> prvalues of larger integral types (such as `int`).
>
>
>
`a.get_a() | aa.get_a()` - this is a prvalue expression
Upvotes: 0 <issue_comment>username_2: List initialization is stricter, and this is why you get the warning.
```
uint8_t mask{a.get_a() | aa.get_a()};
```
The expression inside the braces
```
auto i = a.get_a() | aa.get_a(); // i is int
```
is promoted to `int` and since `int` cannot entirely fit into `uint8_t`, the narrowing warning is issued based on [this rule](http://en.cppreference.com/w/cpp/language/aggregate_initialization):
>
> If the initializer clause is an expression, implicit conversions are
> allowed as per copy-initialization, except if they are narrowing (as
> in list-initialization) (since C++11).
>
>
>
Upvotes: 0 <issue_comment>username_3: You were close with this:
```
uint8_t mask{static_cast<uint8_t>(a.get_a()) | static_cast<uint8_t>(aa.get_a())};
```
But `a.get_a()` and `aa.get_a()` are already `uint8_t`, so the cast does nothing.
It's the `|` operation that:
* converts both operands to `int` (before you can do anything about it)
* evaluates to an `int`
So it's the whole expression you now need to subsequently convert:
```
uint8_t mask{static_cast<uint8_t>(a.get_a() | aa.get_a())};
```
You were also right to try dropping the `{}`-initialisation, which is probably what I'd do too. You just don't need its strictness here.
```
uint8_t mask = a.get_a() | aa.get_a();
```
This is clear, concise and correct.
Upvotes: 4 [selected_answer]<issue_comment>username_4: Most binary arithmetic operations including the `|` bitwise-or that appears here force their subexpressions to be promoted, which is to say they will be at least `int` or `unsigned int` in rank.
C++ 17 [expr] paragraph 11:
>
> Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the *usual arithmetic conversions*, which are defined as follows:
>
>
> * If either operand is of scoped enumeration type, ...
> * If either operand is of type `long double`, ...
> * Otherwise, if either operand is `double`, ...
> * Otherwise, if either operand is `float`, ...
> * Otherwise, the integral promotions shall be performed on both operands. Then the following rules shall be applied to the promoted operands: ...
>
>
>
The [integral promotions](http://en.cppreference.com/w/cpp/language/implicit_conversion#Integral_promotion) here are what cause the `get_a()` values to change from `uint8_t` to `int`. So the result of the `|` expression is also an `int`, and narrowing it to initialize another `uint8_t` is ill-formed.
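The promotion can be observed directly with `decltype` on an unevaluated expression. A small illustrative sketch (the helper name `combine` is made up, not from the question):

```cpp
#include <cstdint>
#include <type_traits>

// combine() mirrors the question's expression. The static_assert shows the
// | expression has type int (both operands were promoted), and casting the
// whole expression back to uint8_t is what avoids the narrowing error.
inline std::uint8_t combine(std::uint8_t a, std::uint8_t b) {
    static_assert(std::is_same<decltype(a | b), int>::value,
                  "operands of | are promoted to int");
    return static_cast<std::uint8_t>(a | b);
}
```

For example, `combine(0x21, 0x55)` produces `0x75`, and list-initializing a `uint8_t` from its result is well-formed because the narrowing happens inside the explicit cast.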
Upvotes: 2
|
2018/03/17
| 1,125 | 3,778 |
<issue_start>username_0: We are five students in a team and we must work in the same project using git.
What I did is:
1. create an empty project
2. add gitignore file
The gitignore file
contains:
```
*.class
nbproject/private/
build/
nbbuild/
dist/
nbdist/
nbactions.xml
nb-configuration.xml
```
3. git init
4. git add .
5. git commit -m "Initial commit"
6. set up the remote and push
But we have a conflict in nbproject/private/private.properties file.
This file contains:
```
user.properties.file=C:\\Users\\Houssem\\AppData\\Roaming\\NetBeans\\8.2\\build.properties
user.properties.file=C:\\Users\\ASUS\\AppData\\Roaming\\NetBeans\\8.2\\build.properties
```
One of us has cloned the repository and he can't add any Java class in his local project.
|
2018/03/17
| 670 | 2,756 |
<issue_start>username_0: I'm trying to get the same result as when I use @Valid on an object parameter in a Controller. When the object is invalid, an exception (MethodArgumentNotValidException) is thrown and caught by my ExceptionHandlerController, which has @RestControllerAdvice.
In my case I want to validate an object, but I can only validate it in the service layer. The object has bean validation annotations, so I'm trying to programmatically throw MethodArgumentNotValidException for my ExceptionHandlerController to handle, but I'm not having success.
So far I have this:
```
private void verifyCard(CardRequest card) {
BeanPropertyBindingResult result = new BeanPropertyBindingResult(card, "card");
SpringValidatorAdapter adapter = new SpringValidatorAdapter(this.validator);
adapter.validate(card, result);
if (result.hasErrors()) {
try {
throw new MethodArgumentNotValidException(null, result);
} catch (MethodArgumentNotValidException e) {
e.printStackTrace();
}
}
}
```
The first parameter is of type MethodParameter and I haven't been able to create this object. Is this the best way to handle my problem?
EDIT 1:
I can't remove the try/catch block. When I remove it I get a compile error. How can I work around this?<issue_comment>username_1: You have already handled the exception in the `catch` block; you should remove the `try-catch` so that your global handler can catch it.
Then declare the method like below:
```
private void verifyCard(CardRequest card) throws MethodArgumentNotValidException
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: `MethodArgumentNotValidException` is a subclass of `Exception`. This means that it's "checked": To throw it out of your `verifyCard(..)` method, you have to declare that `verifyCard(..)` can throw it:
```
private void verifyCard(CardRequest card) throws MethodArgumentNotValidException {
// your code
}
```
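The rule at work is just Java's checked-exception rule, which a minimal standalone example shows (the class and method names below are made up for illustration; nothing here is Spring-specific):

```java
class AppException extends Exception {
    AppException(String message) { super(message); }
}

class CheckedDemo {
    // A checked exception escaping a method must appear in its throws clause.
    static void verify(boolean valid) throws AppException {
        if (!valid) {
            throw new AppException("invalid");
        }
    }

    // The caller either declares the exception too, or catches it.
    static String check(boolean valid) {
        try {
            verify(valid);
            return "valid";
        } catch (AppException e) {
            return e.getMessage();
        }
    }
}
```

The same applies to `MethodArgumentNotValidException`: since it extends `Exception`, any method that lets it escape must declare it in `throws`.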
Upvotes: 2 <issue_comment>username_3: If you have `lombok` dependency in your project, you can also fake compiler by using `@SneakyThrows` annotation.
<https://projectlombok.org/features/SneakyThrows>
Upvotes: 2 <issue_comment>username_4: ```
throw new MethodArgumentNotValidException(null, result);
```
The above constructor call will not work, as a method parameter is necessary.
The valid constructor ([reference](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/bind/MethodArgumentNotValidException.html#constructor.detail)) is:
```
MethodArgumentNotValidException(MethodParameter parameter, BindingResult bindingResult);
```
Hence, in your case:
```
throw new MethodArgumentNotValidException(new MethodParameter(
this.getClass().getDeclaredMethod("verifyCard", YourClassName.class), 0), errors);
```
Upvotes: 2
|
2018/03/17
| 849 | 2,058 |
<issue_start>username_0: I am trying to take a list that has IP address and port numbers and print the data out to be in this format `127.0.0.1:21,80,443`. Here is a sample of dummy data.
```
127.0.0.1
80
127.0.0.1
443
192.168.1.1
21
192.168.1.2
22
192.168.1.2
3389
192.168.1.2
5900
```
I would like this data to output as stated above. Right now, I have the data in a list and am looking to associate the ports with the IP addresses so it is not repeating the IP address to each port. This data should output to:
```
127.0.0.1:80,443
192.168.1.1:21
192.168.1.2:22,3389,5900
```<issue_comment>username_1: You can use format - instead of just printing every element of the list do the following:
```
print "{0} : {1}".format(list[0], list[1])
```
You haven't specified what the input is, though. You should add it next time.
Upvotes: 0 <issue_comment>username_2: Using a [defaultdict](https://docs.python.org/3/library/collections.html#collections.defaultdict) you can collect the ports for each address, and the print them out all at once like:
```
from collections import defaultdict
address_to_ports = defaultdict(list)
with open('file1') as f:
    for address in f:
        address_to_ports[address.strip()].append(next(f).strip())
print(address_to_ports)
print(['{}:{}'.format(a, ','.join(p)) for a, p in address_to_ports.items()])
```
### Results:
```
defaultdict(<class 'list'>, {'127.0.0.1': ['80', '443'], '192.168.1.1': ['21'], '192.168.1.2': ['22', '3389', '5900']})
['127.0.0.1:80,443', '192.168.1.1:21', '192.168.1.2:22,3389,5900']
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Let us suppose your list is in a file called 'ip\_list.txt'
The code
```
f = open("ip_list.txt","r")
count = 2
dict_of_ip = {}
ip = ''
for i in f:
if count%2 == 0:
if i.strip() not in dict_of_ip.keys():
ip = i.strip()
dict_of_ip[ip] = []
else:
dict_of_ip[ip].append(i.strip())
count = count + 1
print(dict_of_ip)
```
Output:
```
{'127.0.0.1': ['80', '443'], '192.168.1.1': ['21'], '192.168.1.2': ['22', '3389', '5900']}
```
Upvotes: 0
|
2018/03/17
| 856 | 2,834 |
<issue_start>username_0: I'm trying to do something very simple, I want to do a POST with AJAX Jquery to a Node.js server and have the server return a response. My problem is that I can't get that answer from the server. If someone can help me, I will be very grateful.
client.js
```
$(document).ready(function(){
$("button").click(function(){
$.post("http://localhost:3333/vrp",
{
name: "<NAME>",
city: "Duckburg"
},
function(data,status){
alert("Data: " + data + "\nStatus: " + status); //This section code doesn't execute.
})
.fail(function() {
alert( "error" ); //It's executed this section of code and I can see it in my browser.
});
});
});
```
server.js
```
var vrp = require("./vrp");
var bodyParser = require("body-parser");
var express = require('express');
var app = express();
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());
app.post('/vrp', function(req, res){
console.log(JSON.stringify(req.body)); //Here I can see the message I sent.
res.contentType('json');
res.send(JSON.stringify(req.body));
});
var listener = app.listen(3333, function () {
console.log('Your app is listening on port ' +
listener.address().port);
});
```<issue_comment>username_1: ```
app.post('/vrp', function(req, res){
console.log(JSON.stringify(req.body));
return res.status(200).send({data:req.body});
});
```
Upvotes: 0 <issue_comment>username_2: Try in server.js add CORS header, eg. like this
```
app.post('/vrp', function(req, res){
console.log(JSON.stringify(req.body));
res.header("Access-Control-Allow-Origin", "*").send(req.body);
});
```
If this runs, then you can be 100% sure that it was a CORS problem.
For a real app you can use this solution or something more complex like middleware, e.g.
```
app.use(function(req, res, next) {
res.header("Access-Control-Allow-Origin", "*");
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
next();
});
```
Or you can use the cors middleware module <https://github.com/expressjs/cors> for a more flexible configuration.
Upvotes: 2 [selected_answer]<issue_comment>username_3: Latest way to do this with Node.js LTS version 16.13.1
Simply install cors middleware module:
```
npm install --save cors
```
And use it in your post route like this:
```
// Other parts of your code
var express = require('express');
var app = express();
var cors = require('cors');
// app.use(cors()); // if you want cors to be used for all routes
app.post('/vrp', cors(), (req, res) => {
console.log(JSON.stringify(req.body));
res.send(req.body);
});
```
Pretty neat :).
If you use `app.use(cors())`, then you don't have to include the middleware as a parameter in individual routes; the middleware will be used for all routes
Upvotes: 0
|
2018/03/17
| 445 | 1,537 |
<issue_start>username_0: I am making a request to an API that I have on my computer (192.168.1.132) with react native from my phone (in the same network):
```py
fetch('http://192.168.1.132:8000/test', {
method: 'GET',
headers: { 'Authorization': 'Bearer 4uFPmkP5326DXcRuHDKjRRrmhdeIBJ'},
credentials: 'same-origin'
})
.then((response) => response.json())
.then((responseJson) => {
console.log('RESPONSE: ' + JSON.stringify(responseJson))
})
```
But I am sniffing the request and it has not the Authorization header (any other that I put will appear, but not Authorization). In Xcode I have enabled 'AllowArbitraryLoads'. So my API returns 401 (Unauthorized) because obviously there is no Authorization header. Why is it not included in my request?<issue_comment>username_1: You need a trailing slash on your url for headers to attach. As pointed out in the comments, make credentials say 'include'. If your API attaches a CSRF token in the pre-flight, then it will include that cookie as well.
Upvotes: 3 <issue_comment>username_2: First I was just using the following in my **Node Express API**:
```
app.use(function(req, res, next) {
res.header("Access-Control-Allow-Origin", "*");
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, Authentication");
next();
});
```
It didn't work from the **React app**, but it worked when making the request directly from Insomnia; then I needed to add the following:
```
const cors = require('cors');
app.options('*', cors())
```
Upvotes: 1
|
2018/03/17
| 586 | 1,717 |
<issue_start>username_0: Windows 10, Python 3.6
I have a dataframe df
```
df=pd.DataFrame({'name':['boo', 'foo', 'too', 'boo', 'roo', 'too'],
'zip':['30004', '02895', '02895', '30750', '02895', '02895']})
```
I want to find the repeated records that have the same 'name' and 'zip', and record the repeat count. The ideal output is
```
name repeat zip
0 too 1 02895
```
Because my dataframe has many more than six rows, I need to use an iterative method. I appreciate any tips.
```
#create DataFrame from online source
#df = pd.read_csv('someonline.csv')
#df = pd.read_html('someurl')[0]
#L = []
#for x in iterator:
#in loop added data to list
# L.append(x)
##create DataFrame from constructor
#df = pd.DataFrame(L)
df = df.groupby(df.columns.tolist()).size().reset_index(name='repeat')
#if need specify columns
#df = df.groupby(['name','zip']).size().reset_index(name='repeat')
print (df)
name zip repeat
0 boo 30004 1
1 boo 30750 1
2 foo 02895 1
3 roo 02895 1
4 too 02895 2
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Pandas has a handy `.duplicated()` method that can help you identify duplicates.
```
df.duplicated()
```
By passing the duplicate vector into a selection you can get the duplicate record:
```
df[df.duplicated()]
```
You can get the sum of the duplicated records by using `.sum()`
```
df.duplicated().sum()
```
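These pieces can be combined on the question's data; a runnable sketch that reports `repeat` as the number of extra occurrences, matching the desired output row (`too / 1 / 02895`):

```python
import pandas as pd

df = pd.DataFrame({'name': ['boo', 'foo', 'too', 'boo', 'roo', 'too'],
                   'zip': ['30004', '02895', '02895', '30750', '02895', '02895']})

# Count occurrences of each (name, zip) pair, keep only repeated pairs,
# and convert the total count into "extra occurrences".
counts = df.groupby(['name', 'zip']).size().reset_index(name='repeat')
dupes = counts[counts['repeat'] > 1].assign(repeat=lambda d: d['repeat'] - 1)
print(dupes)
```

Only the `('too', '02895')` pair occurs more than once in this sample, so `dupes` contains a single row.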
Upvotes: 1
|
2018/03/17
| 1,729 | 5,686 |
<issue_start>username_0: I have successfully been using the gTTS module in order to get audio from Google Translate for a while. I use it quite sparsely (I must have made 25 requests in total), and don't believe I could have hit any kind of limit that would cause my address to be blocked from using the service.
However, today, after trying to use it (I haven't used it in 1-2 months), I found that the following program:
```
from gtts import gTTS
tts = gTTS('hallo', 'de')
tts.save('hallo.mp3')
```
causes an error. I tracked down the problem, and I managed to see that even this simple program:
```
import requests
response = requests.get("https://translate.google.com/")
```
Causes the following error:
```
Traceback (most recent call last):
File "C:\...\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
chunked=chunked)
File "C:\...\lib\site-packages\urllib3\connectionpool.py", line 346, in _make_request
self._validate_conn(conn)
File "C:\...\lib\site-packages\urllib3\connectionpool.py", line 850, in _validate_conn
conn.connect()
File "C:\...\lib\site-packages\urllib3\connection.py", line 326, in connect
ssl_context=context)
File "C:\...\lib\site-packages\urllib3\util\ssl_.py", line 329, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\...\lib\ssl.py", line 407, in wrap_socket
_context=self, _session=session)
File "C:\...\lib\ssl.py", line 814, in __init__
self.do_handshake()
File "C:\...\lib\ssl.py", line 1068, in do_handshake
self._sslobj.do_handshake()
File "C:\...\lib\ssl.py", line 689, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:777)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\...\lib\site-packages\requests\adapters.py", line 440, in send
timeout=timeout
File "C:\...\lib\site-packages\urllib3\connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\...\lib\site-packages\urllib3\util\retry.py", line 388, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='translate.google.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:777)'),))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main2.py", line 2, in <module>
response = requests.get("https://translate.google.com/")
File "C:\...\lib\site-packages\requests\api.py", line 72, in get
return request('get', url, params=params, **kwargs)
File "C:\...\lib\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\...\lib\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\...\lib\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\...\lib\site-packages\requests\adapters.py", line 506, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='translate.google.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:777)'),))
```
I would like to know if anyone has an idea what the issue could be. I can get on the Google Translate website without any problems from my browser, and have no issues using the audio either.<issue_comment>username_1: This looks like an error related to your proxy settings, especially if you are using your work PC. I got the same issue, but with a different error message, for example:
>
> gTTSError: Connection error during token calculation:
> HTTPSConnectionPool(host='translate.google.com', port=443): Max
> retries exceeded with url: / (Caused by SSLError(SSLError("bad
> handshake: Error([('SSL routines', 'ssl3\_get\_server\_certificate',
> 'certificate verify failed')],)",),))
>
>
>
To further investigate the issue, you can debug it in the command line.
```
(base) c:\gtts-cli "sample text to debug" --debug --output test.mp3
```
you should see results like the following:
>
> ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authentication Required',)))
>
>
>
Solution:
I have checked the gTTS documentation: there is no way to pass your proxy settings to the API, and there is no option to skip SSL verification either. So the only workaround is to ignore SSL verification by changing the following gtts files:
1. In tts.py, line 208, change the request call to add `verify=False`:
```
r = requests.get(self.GOOGLE_TTS_URL,
                 params=payload,
                 headers=self.GOOGLE_TTS_HEADERS,
                 proxies=urllib.request.getproxies(),
                 verify=False)
```
2. In lang.py, line 56:
>
>
> ```
> page = requests.get(URL_BASE, verify=False)
>
> ```
>
>
Then try the debug command line again; you should now be able to get the file recorded:
```
(base) c:\gtts-cli "sample text to debug" --debug --output test.mp3
gtts.tts - DEBUG - status-0: 200
gtts.tts - DEBUG - part-0 written to <_io.BufferedWriter name='test.mp3'>
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: The accepted answer did not work for me since the code has changed; the way I got it to work was to add `verify=False` in gtts_token.py instead:
```
response = requests.get("https://translate.google.com/", verify=False)
```
Upvotes: 2
|
2018/03/17
| 706 | 2,554 |
<issue_start>username_0: AFAIK after block validation node runs all transactions in the block, changing the state (list of UTXOs)
Let's imagine that at some point a node realizes that it was on the wrong chain and there is a longer chain available, which forked some blocks earlier.
How does it make the switch? I imagine that it should run all transactions in reverse until the fork point, to restore the state, and then replay all transactions in the blocks from the longer chain?
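To make the idea concrete, here is a hypothetical sketch of that undo-then-replay process over a toy UTXO-set model (this is illustrative only, not how any real node implements it; the block/transaction dictionaries are made up for the example):

```python
# A UTXO is identified as a (txid, output_index) pair; `utxos` is a set.
# Each block carries "txs"; each tx lists the outputs it "creates" and
# the previously unspent outputs it "spends".

def undo_block(utxos, block):
    # Walk transactions in reverse: later txs may spend earlier outputs.
    for tx in reversed(block["txs"]):
        for out in tx["creates"]:
            utxos.discard(out)   # remove outputs this tx created
        for spent in tx["spends"]:
            utxos.add(spent)     # restore outputs this tx consumed

def apply_block(utxos, block):
    for tx in block["txs"]:
        for spent in tx["spends"]:
            utxos.discard(spent)
        for out in tx["creates"]:
            utxos.add(out)

def reorg(utxos, old_branch, new_branch):
    for block in reversed(old_branch):  # undo back to the fork point
        undo_block(utxos, block)
    for block in new_branch:            # replay the longer chain
        apply_block(utxos, block)
```

Real implementations avoid re-deriving the "spends" information by storing per-block undo data on disk when the block is first connected, so disconnecting a block is cheap.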
Thanks!
|
2018/03/17
| 2,101 | 5,670 |
<issue_start>username_0: Why the output of this program is `4`?
```
#include <iostream>
int main()
{
short A[] = {1, 2, 3, 4, 5, 6};
std::cout << *(short*)((char*)A + 7) << std::endl;
return 0;
}
```
From my understanding, on an x86 little-endian system, where char is 1 byte and short is 2 bytes, the output should be `0x0500`, because the data in array `A` is as follows in hex:
```
01 00 02 00 03 00 04 00 05 00 06 00
```
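(This assumed layout can be checked directly by dumping the bytes through an `unsigned char*`, which the standard always permits — a quick sketch, with the output shown for a little-endian machine:)

```cpp
#include <cstdio>
#include <cstddef>

// Print the raw bytes of any object. Inspecting bytes through
// unsigned char* is always well-defined, unlike the short* cast
// in the program above.
void dump_bytes(const void* obj, std::size_t n)
{
    const unsigned char* p = static_cast<const unsigned char*>(obj);
    for (std::size_t i = 0; i < n; ++i)
        std::printf("%02x ", p[i]);
    std::printf("\n");
}
```

Calling `dump_bytes(A, sizeof A)` on the array above prints `01 00 02 00 03 00 04 00 05 00 06 00` on a little-endian machine.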
We move 7 bytes forward from the beginning, and then read 2 bytes. What am I missing?<issue_comment>username_1: ~~**This is arguably a bug in GCC.**~~
First, it is to be noted that your code is invoking undefined behavior, due to violation of the rules of strict aliasing.
~~With that said, here's why I consider it a bug:~~
1. The same expression, when first assigned to an intermediate `short` or `short *`, causes the expected behavior. It's only when the expression is passed directly as a function argument that the unexpected behavior manifests.
2. It occurs even when compiled with `-O0 -fno-strict-aliasing`.
I re-wrote your code in C to eliminate the possibility of any C++ craziness. Your question ~~is~~ *was* tagged `c` after all! I added the `pshort` function to ensure that the variadic nature of `printf` wasn't involved.
```
#include <stdio.h>
static void pshort(short val)
{
printf("0x%hx ", val);
}
int main(void)
{
short A[] = {1, 2, 3, 4, 5, 6};
#define EXP ((short*)((char*)A + 7))
short *p = EXP;
short q = *EXP;
pshort(*p);
pshort(q);
pshort(*EXP);
printf("\n");
return 0;
}
```
After compiling with `gcc (GCC) 7.3.1 20180130 (Red Hat 7.3.1-2)`:
```
gcc -O0 -fno-strict-aliasing -g -Wall -Werror endian.c
```
Output:
```
0x500 0x500 0x4
```
It appears that **GCC is actually generating different code** when the expression is used directly as an argument, even though I'm clearly using the same expression (`EXP`).
Dumping with `objdump -Mintel -S --no-show-raw-insn endian`:
```
int main(void)
{
40054d: push rbp
40054e: mov rbp,rsp
400551: sub rsp,0x20
short A[] = {1, 2, 3, 4, 5, 6};
400555: mov WORD PTR [rbp-0x16],0x1
40055b: mov WORD PTR [rbp-0x14],0x2
400561: mov WORD PTR [rbp-0x12],0x3
400567: mov WORD PTR [rbp-0x10],0x4
40056d: mov WORD PTR [rbp-0xe],0x5
400573: mov WORD PTR [rbp-0xc],0x6
#define EXP ((short*)((char*)A + 7))
short *p = EXP;
400579: lea rax,[rbp-0x16] ; [rbp-0x16] is A
40057d: add rax,0x7
400581: mov QWORD PTR [rbp-0x8],rax ; [rbp-0x08] is p
short q = *EXP;
400585: movzx eax,WORD PTR [rbp-0xf] ; [rbp-0xf] is A plus 7 bytes
400589: mov WORD PTR [rbp-0xa],ax ; [rbp-0xa] is q
pshort(*p);
40058d: mov rax,QWORD PTR [rbp-0x8] ; [rbp-0x08] is p
400591: movzx eax,WORD PTR [rax] ; *p
400594: cwde
400595: mov edi,eax
400597: call 400527
pshort(q);
40059c: movsx eax,WORD PTR [rbp-0xa] ; [rbp-0xa] is q
4005a0: mov edi,eax
4005a2: call 400527
pshort(*EXP);
 4005a7: movzx eax,WORD PTR [rbp-0x10] ; [rbp-0x10] is A plus 6 bytes ********
4005ab: cwde
4005ac: mov edi,eax
4005ae: call 400527
printf("\n");
4005b3: mov edi,0xa
4005b8: call 400430
return 0;
4005bd: mov eax,0x0
}
4005c2: leave
4005c3: ret
```
---
* I get the same result with GCC 4.9.4 and GCC 5.5.0 from Docker hub
Upvotes: 2 <issue_comment>username_2: You are violating strict aliasing rules here. You can't just read half-way into an object and pretend it's an object all on its own. You can't invent hypothetical objects using byte offsets like this. GCC is perfectly within its rights to do crazy sh!t like going back in time and murdering Elvis Presley, when you hand it your program.
What you *are* allowed to do is inspect and manipulate the bytes that make up an arbitrary object, using a `char*`. Using that privilege:
```
#include <algorithm>
#include <iostream>
int main()
{
short A[] = {1, 2, 3, 4, 5, 6};
short B;
std::copy(
(char*)A + 7,
(char*)A + 7 + sizeof(short),
(char*)&B
);
std::cout << std::showbase << std::hex << B << std::endl;
}
// Output: 0x500
```
### ([live demo](http://coliru.stacked-crooked.com/a/5a09d40aac97b9b3))
But you can't just "make up" a non-existent object in the original collection.
Furthermore, even if you have a compiler that can be told to ignore this problem (e.g. with GCC's `-fno-strict-aliasing` switch), the made-up object is not correctly *aligned* for any current mainstream architecture. A `short` cannot legally live at that odd-numbered location in memory†, so you doubly can't pretend there is one there. There's just no way to get around how undefined the original code's behaviour is; in fact, if you pass GCC the `-fsanitize=undefined` switch it will tell you as much.
† I'm simplifying a little.
Upvotes: 6 [selected_answer]<issue_comment>username_3: The program has undefined behaviour due to casting an incorrectly aligned pointer to `(short*)`. This breaks the rules in 6.3.2.3 p6 in C11, which is nothing to do with strict aliasing as claimed in other answers:
>
> A pointer to an object type may be converted to a pointer to a different object type. If the resulting pointer is not correctly aligned for the referenced type, the behavior is undefined.
>
>
>
In [expr.static.cast] p13 C++ says that converting the unaligned `char*` to `short*` gives an unspecified pointer value, which might be an invalid pointer, which can't be dereferenced.
The correct way to inspect the bytes is through the `char*`, not by casting back to `short*` and pretending there is a `short` at an address where a `short` cannot live.
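For instance, a `std::memcpy`-based sketch of that inspection (the `0x500` result mentioned below assumes a little-endian target):

```cpp
#include <cstring>
#include <cstddef>

// Read a short from an arbitrary (possibly misaligned) byte offset.
// std::memcpy through char* has no alignment requirement, so this is
// well-defined, unlike casting the offset pointer back to short*.
short read_short_at(const void* base, std::size_t byte_offset)
{
    short out;
    std::memcpy(&out, static_cast<const char*>(base) + byte_offset, sizeof out);
    return out;
}
```

On a little-endian machine, `read_short_at(A, 7)` yields `0x500` for the array in the question, matching the `std::copy` result in the earlier answer.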
Upvotes: 4
|
2018/03/17
| 2,185 | 6,370 |
<issue_start>username_0: Scenario: I have a graph, represented as a collection of nodes (0...n). There are no edges in this graph.
To this graph, I connect nodes at random, one at a time. An alternative way of saying this would be that I add random edges to the graph, one at a time.
I do not want to create simple cycles in this graph.
Is there a simple and/or very efficient way to track the creation of cycles as I add random edges? With a graph *traversal*, it is easy, since we only need to track the two end nodes of a single path. But, with this situation, we have any number of paths that we need to track - and sometimes these paths combine into a larger path, and we need to track that too.
I have tried several approaches, which mostly come down to maintaining a list of "outer nodes" and a set of nodes internal to them, then going through and updating that list whenever I add an edge. But it becomes extremely convoluted, especially if I *remove* an edge from the graph.
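For what it's worth, the incremental bookkeeping described above is essentially what a disjoint-set (union-find) structure provides in near-constant amortized time per edge — a minimal sketch:

```python
# Union-find: each node starts in its own set; an edge (u, v) would
# close a cycle iff u and v are already in the same set.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def add_edge(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False          # edge would create a cycle: reject it
        self.parent[ru] = rv      # otherwise merge the two components
        return True
```

Note that plain union-find only supports edge *additions*; handling edge removals efficiently needs heavier machinery (e.g. link-cut trees or fully dynamic connectivity structures).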
I have attempted to search out algorithms or discussions on this, and I can't really find anything. I know I can do a BFS to check for cycles, but it's so so so horribly inefficient to BFS after every single edge addition.
|